Test Report: KVM_Linux_containerd 17957

89df817c127b40a78141e8021123a5a55115ceb7:2024-01-15:32713

Test fail (1/318)

Order  Failed test                      Duration
42     TestAddons/parallel/HelmTiller   17.41s
TestAddons/parallel/HelmTiller (17.41s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.880061ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-cmkcx" [3f460238-8c55-491b-a343-20d0169c76f8] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00515656s
addons_test.go:473: (dbg) Run:  kubectl --context addons-431563 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-431563 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.511026426s)
addons_test.go:478: kubectl --context addons-431563 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-431563 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-431563 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.811003329s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-431563 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431563 addons disable helm-tiller --alsologtostderr -v=1: exit status 11 (537.937231ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0115 11:40:43.821235  213692 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:40:43.821375  213692 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:40:43.821387  213692 out.go:309] Setting ErrFile to fd 2...
	I0115 11:40:43.821395  213692 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:40:43.821573  213692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-203994/.minikube/bin
	I0115 11:40:43.821836  213692 mustload.go:65] Loading cluster: addons-431563
	I0115 11:40:43.822160  213692 config.go:182] Loaded profile config "addons-431563": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:40:43.822183  213692 addons.go:597] checking whether the cluster is paused
	I0115 11:40:43.822269  213692 config.go:182] Loaded profile config "addons-431563": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:40:43.822281  213692 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:40:43.822679  213692 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:40:43.822727  213692 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:40:43.839279  213692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34467
	I0115 11:40:43.839744  213692 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:40:43.840340  213692 main.go:141] libmachine: Using API Version  1
	I0115 11:40:43.840364  213692 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:40:43.840755  213692 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:40:43.840988  213692 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:40:43.842552  213692 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:40:43.842806  213692 ssh_runner.go:195] Run: systemctl --version
	I0115 11:40:43.842844  213692 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:40:43.845579  213692 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:40:43.846004  213692 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:40:43.846051  213692 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:40:43.846136  213692 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:40:43.846329  213692 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:40:43.846514  213692 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:40:43.846662  213692 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:40:43.945071  213692 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0115 11:40:43.945183  213692 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 11:40:44.052120  213692 cri.go:89] found id: "f1b898b199f0a8127511547436d568ed979639bcb89a6f533cfd9de40bc0f5c4"
	I0115 11:40:44.052154  213692 cri.go:89] found id: "129f51a4bc1c42da98296d7afb4bda27732cc386b1e65fafa42a1adf1944e3a7"
	I0115 11:40:44.052161  213692 cri.go:89] found id: "849fac676ad125ed61baf80cc7b8ee9068f99c0b0b70d50809374b9adf5dbe86"
	I0115 11:40:44.052167  213692 cri.go:89] found id: "c1fe175bb7545b8a49546544be9aa181c93012cc0efdd71c009b309924db824b"
	I0115 11:40:44.052173  213692 cri.go:89] found id: "3b61d9e62fc55d0c713d0bedfa5bfc5fabe2cb7253be4a5febaa83f829e70df5"
	I0115 11:40:44.052190  213692 cri.go:89] found id: "0f0d3dbfec5d765700a1e85444d2825dadf189e68d355cae8c75112fcb323833"
	I0115 11:40:44.052195  213692 cri.go:89] found id: "2b4fb8338de86fc7bec9bd409930f8ee0c01ad02d21e0d68434c19f29da4fcd8"
	I0115 11:40:44.052200  213692 cri.go:89] found id: "729ebb2175ddd3ebfb02fdfacf7334f92eae4a81c5fe8f2a43730c3443a3019f"
	I0115 11:40:44.052205  213692 cri.go:89] found id: "e9f8bc8b33303366e0106efcaf6f32df833f370b9172c5bf1884e55c9a1c000e"
	I0115 11:40:44.052229  213692 cri.go:89] found id: "4671e5f9ad61896404156120444edd0a89344f2e85cd564f4590210fbba779cb"
	I0115 11:40:44.052239  213692 cri.go:89] found id: "b0dba84ef9eea9fde34ee9844203137f459712a4a391728893ed1063b2ec9cb3"
	I0115 11:40:44.052245  213692 cri.go:89] found id: "aae5b527406e321c80c61039a758b3d380c8053aaca7e1bfdb281cc90f2e3b42"
	I0115 11:40:44.052251  213692 cri.go:89] found id: "4fb4c1511e1a603d670c5d6d97cbfc73c780c11d9b0174b8fd11022854b2ae1c"
	I0115 11:40:44.052262  213692 cri.go:89] found id: "9751d9aa40631ce1bf4c3f6867937398bb65fbcdec7bd91ec4a14761b18ea566"
	I0115 11:40:44.052271  213692 cri.go:89] found id: "8531fe16028b270246f5fb00bf09e9fade6a8d660b7e95853af005632267fafa"
	I0115 11:40:44.052276  213692 cri.go:89] found id: "76392270d1891588436d8277f1521f0469415bc0f3e70dd37777fd4e6387bda7"
	I0115 11:40:44.052295  213692 cri.go:89] found id: "f91a4522dff1db809549f25be555e02295dd128b66dc88798c9e7595f4a0815f"
	I0115 11:40:44.052301  213692 cri.go:89] found id: "4c6454d2325632e3cb46d8745000040d576fea6b6e01548cb601533a3fc29daa"
	I0115 11:40:44.052305  213692 cri.go:89] found id: "50a7c9e4a2673b35ac0a0e7817090730fad5cedab8f958d64029a25c07ef3c4e"
	I0115 11:40:44.052311  213692 cri.go:89] found id: "6e7811bd3d2d5dc76419629877a3959acd3134772bc896da92b702b595edef2b"
	I0115 11:40:44.052316  213692 cri.go:89] found id: ""
	I0115 11:40:44.052380  213692 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0115 11:40:44.272591  213692 main.go:141] libmachine: Making call to close driver server
	I0115 11:40:44.272621  213692 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:40:44.272995  213692 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:40:44.273019  213692 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:40:44.272996  213692 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:40:44.275282  213692 out.go:177] 
	W0115 11:40:44.276524  213692 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-15T11:40:44Z" level=error msg="stat /run/containerd/runc/k8s.io/bce3afcbc66d8ff24f5112b9093f9a82d575207ae9cecfe83968b8fe3d2300a9: no such file or directory"
	
	W0115 11:40:44.276546  213692 out.go:239] * 
	W0115 11:40:44.282204  213692 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_6f112806b36003b4c7cc9d1475fa654343463182_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0115 11:40:44.284070  213692 out.go:177] 

** /stderr **
addons_test.go:492: failed disabling helm-tiller addon. args "out/minikube-linux-amd64 -p addons-431563 addons disable helm-tiller --alsologtostderr -v=1": exit status 11
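The failure above comes from minikube's "is the cluster paused?" check: it shells out to `runc list -f json` and inspects container states, and the `stat ... no such file or directory` error suggests a container's state directory vanished between `crictl ps` and `runc list` (a teardown race), so the whole addon-disable aborts. A minimal sketch of that paused-check logic, under the assumption that `runc list -f json` returns a JSON array of `{id, status}` objects (the function name and sample data below are hypothetical, not from minikube's source):

```python
import json

def paused_ids(runc_list_json: str) -> list[str]:
    """Return IDs of paused containers from `runc list -f json` output.

    `runc list -f json` prints a JSON array (or `null` when there are no
    containers); any container with status "paused" blocks addon disable.
    """
    containers = json.loads(runc_list_json) or []
    return [c["id"] for c in containers if c.get("status") == "paused"]

# Illustrative sample, not real cluster data:
sample = json.dumps([
    {"id": "abc123", "status": "running"},
    {"id": "def456", "status": "paused"},
])
print(paused_ids(sample))  # -> ['def456']
```

If the race is the cause, a plausible mitigation on minikube's side would be retrying the `runc list` call (or tolerating a stat error for a container that `crictl` no longer reports) rather than failing the disable outright.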
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-431563 -n addons-431563
helpers_test.go:244: <<< TestAddons/parallel/HelmTiller FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/HelmTiller]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-431563 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-431563 logs -n 25: (1.355257518s)
helpers_test.go:252: TestAddons/parallel/HelmTiller logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-339684                                                                     | download-only-339684 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:37 UTC |
	| start   | -o=json --download-only                                                                     | download-only-004731 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC |                     |
	|         | -p download-only-004731                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:37 UTC |
	| delete  | -p download-only-004731                                                                     | download-only-004731 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:37 UTC |
	| start   | -o=json --download-only                                                                     | download-only-990313 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC |                     |
	|         | -p download-only-990313                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:37 UTC |
	| delete  | -p download-only-990313                                                                     | download-only-990313 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:37 UTC |
	| delete  | -p download-only-339684                                                                     | download-only-339684 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:37 UTC |
	| delete  | -p download-only-004731                                                                     | download-only-004731 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:37 UTC |
	| delete  | -p download-only-990313                                                                     | download-only-990313 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:37 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-871944 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC |                     |
	|         | binary-mirror-871944                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42239                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-871944                                                                     | binary-mirror-871944 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:37 UTC |
	| addons  | disable dashboard -p                                                                        | addons-431563        | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC |                     |
	|         | addons-431563                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-431563        | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC |                     |
	|         | addons-431563                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-431563 --wait=true                                                                | addons-431563        | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:40 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-431563 ssh cat                                                                       | addons-431563        | jenkins | v1.32.0 | 15 Jan 24 11:40 UTC | 15 Jan 24 11:40 UTC |
	|         | /opt/local-path-provisioner/pvc-d540bbe1-6ee4-4fa1-a0cc-5bdcb43e6364_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-431563 addons disable                                                                | addons-431563        | jenkins | v1.32.0 | 15 Jan 24 11:40 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-431563        | jenkins | v1.32.0 | 15 Jan 24 11:40 UTC | 15 Jan 24 11:40 UTC |
	|         | addons-431563                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-431563        | jenkins | v1.32.0 | 15 Jan 24 11:40 UTC | 15 Jan 24 11:40 UTC |
	|         | -p addons-431563                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-431563 ip                                                                            | addons-431563        | jenkins | v1.32.0 | 15 Jan 24 11:40 UTC | 15 Jan 24 11:40 UTC |
	| addons  | addons-431563 addons disable                                                                | addons-431563        | jenkins | v1.32.0 | 15 Jan 24 11:40 UTC | 15 Jan 24 11:40 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-431563 addons                                                                        | addons-431563        | jenkins | v1.32.0 | 15 Jan 24 11:40 UTC | 15 Jan 24 11:40 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-431563        | jenkins | v1.32.0 | 15 Jan 24 11:40 UTC | 15 Jan 24 11:40 UTC |
	|         | -p addons-431563                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-431563        | jenkins | v1.32.0 | 15 Jan 24 11:40 UTC |                     |
	|         | addons-431563                                                                               |                      |         |         |                     |                     |
	| addons  | addons-431563 addons disable                                                                | addons-431563        | jenkins | v1.32.0 | 15 Jan 24 11:40 UTC |                     |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 11:37:40
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 11:37:40.673252  212092 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:37:40.673511  212092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:37:40.673522  212092 out.go:309] Setting ErrFile to fd 2...
	I0115 11:37:40.673526  212092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:37:40.673731  212092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-203994/.minikube/bin
	I0115 11:37:40.674342  212092 out.go:303] Setting JSON to false
	I0115 11:37:40.675181  212092 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":15613,"bootTime":1705303048,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 11:37:40.675245  212092 start.go:138] virtualization: kvm guest
	I0115 11:37:40.677510  212092 out.go:177] * [addons-431563] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 11:37:40.678951  212092 notify.go:220] Checking for updates...
	I0115 11:37:40.678955  212092 out.go:177]   - MINIKUBE_LOCATION=17957
	I0115 11:37:40.680303  212092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:37:40.681724  212092 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17957-203994/kubeconfig
	I0115 11:37:40.682943  212092 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-203994/.minikube
	I0115 11:37:40.684243  212092 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 11:37:40.685556  212092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 11:37:40.687029  212092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:37:40.718862  212092 out.go:177] * Using the kvm2 driver based on user configuration
	I0115 11:37:40.720136  212092 start.go:298] selected driver: kvm2
	I0115 11:37:40.720150  212092 start.go:902] validating driver "kvm2" against <nil>
	I0115 11:37:40.720160  212092 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 11:37:40.720890  212092 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 11:37:40.720982  212092 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17957-203994/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 11:37:40.735739  212092 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 11:37:40.735839  212092 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 11:37:40.736067  212092 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 11:37:40.736147  212092 cni.go:84] Creating CNI manager for ""
	I0115 11:37:40.736164  212092 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0115 11:37:40.736179  212092 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 11:37:40.736192  212092 start_flags.go:321] config:
	{Name:addons-431563 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-431563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:37:40.736363  212092 iso.go:125] acquiring lock: {Name:mk7bc47681a8ce0f0bd494ddfd59b43adf8a6e55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 11:37:40.738287  212092 out.go:177] * Starting control plane node addons-431563 in cluster addons-431563
	I0115 11:37:40.739518  212092 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 11:37:40.739550  212092 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17957-203994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0115 11:37:40.739558  212092 cache.go:56] Caching tarball of preloaded images
	I0115 11:37:40.739677  212092 preload.go:174] Found /home/jenkins/minikube-integration/17957-203994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0115 11:37:40.739700  212092 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0115 11:37:40.740011  212092 profile.go:148] Saving config to /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/config.json ...
	I0115 11:37:40.740037  212092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/config.json: {Name:mk0cefadbbd05a58ea70ff12d7fa292659173dd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:37:40.740205  212092 start.go:365] acquiring machines lock for addons-431563: {Name:mk6080771db37bb1fb38e7cb24744ec018d8d487 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0115 11:37:40.740250  212092 start.go:369] acquired machines lock for "addons-431563" in 32.265µs
	I0115 11:37:40.740269  212092 start.go:93] Provisioning new machine with config: &{Name:addons-431563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-431563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 11:37:40.740333  212092 start.go:125] createHost starting for "" (driver="kvm2")
	I0115 11:37:40.741965  212092 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0115 11:37:40.742115  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:37:40.742167  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:37:40.755990  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40779
	I0115 11:37:40.756487  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:37:40.757095  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:37:40.757117  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:37:40.757486  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:37:40.757689  212092 main.go:141] libmachine: (addons-431563) Calling .GetMachineName
	I0115 11:37:40.757872  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:37:40.758039  212092 start.go:159] libmachine.API.Create for "addons-431563" (driver="kvm2")
	I0115 11:37:40.758070  212092 client.go:168] LocalClient.Create starting
	I0115 11:37:40.758104  212092 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17957-203994/.minikube/certs/ca.pem
	I0115 11:37:40.935808  212092 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17957-203994/.minikube/certs/cert.pem
	I0115 11:37:41.219747  212092 main.go:141] libmachine: Running pre-create checks...
	I0115 11:37:41.219773  212092 main.go:141] libmachine: (addons-431563) Calling .PreCreateCheck
	I0115 11:37:41.220336  212092 main.go:141] libmachine: (addons-431563) Calling .GetConfigRaw
	I0115 11:37:41.220824  212092 main.go:141] libmachine: Creating machine...
	I0115 11:37:41.220842  212092 main.go:141] libmachine: (addons-431563) Calling .Create
	I0115 11:37:41.221081  212092 main.go:141] libmachine: (addons-431563) Creating KVM machine...
	I0115 11:37:41.222448  212092 main.go:141] libmachine: (addons-431563) DBG | found existing default KVM network
	I0115 11:37:41.223340  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:41.223156  212114 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f210}
	I0115 11:37:41.228649  212092 main.go:141] libmachine: (addons-431563) DBG | trying to create private KVM network mk-addons-431563 192.168.39.0/24...
	I0115 11:37:41.301633  212092 main.go:141] libmachine: (addons-431563) DBG | private KVM network mk-addons-431563 192.168.39.0/24 created
	I0115 11:37:41.301689  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:41.301606  212114 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17957-203994/.minikube
	I0115 11:37:41.301719  212092 main.go:141] libmachine: (addons-431563) Setting up store path in /home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563 ...
	I0115 11:37:41.301744  212092 main.go:141] libmachine: (addons-431563) Building disk image from file:///home/jenkins/minikube-integration/17957-203994/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 11:37:41.301764  212092 main.go:141] libmachine: (addons-431563) Downloading /home/jenkins/minikube-integration/17957-203994/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17957-203994/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0115 11:37:41.515877  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:41.515649  212114 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa...
	I0115 11:37:41.592567  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:41.592420  212114 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/addons-431563.rawdisk...
	I0115 11:37:41.592601  212092 main.go:141] libmachine: (addons-431563) DBG | Writing magic tar header
	I0115 11:37:41.592619  212092 main.go:141] libmachine: (addons-431563) DBG | Writing SSH key tar header
	I0115 11:37:41.592634  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:41.592530  212114 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563 ...
	I0115 11:37:41.592651  212092 main.go:141] libmachine: (addons-431563) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563
	I0115 11:37:41.592666  212092 main.go:141] libmachine: (addons-431563) Setting executable bit set on /home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563 (perms=drwx------)
	I0115 11:37:41.592682  212092 main.go:141] libmachine: (addons-431563) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17957-203994/.minikube/machines
	I0115 11:37:41.592696  212092 main.go:141] libmachine: (addons-431563) Setting executable bit set on /home/jenkins/minikube-integration/17957-203994/.minikube/machines (perms=drwxr-xr-x)
	I0115 11:37:41.592709  212092 main.go:141] libmachine: (addons-431563) Setting executable bit set on /home/jenkins/minikube-integration/17957-203994/.minikube (perms=drwxr-xr-x)
	I0115 11:37:41.592717  212092 main.go:141] libmachine: (addons-431563) Setting executable bit set on /home/jenkins/minikube-integration/17957-203994 (perms=drwxrwxr-x)
	I0115 11:37:41.592726  212092 main.go:141] libmachine: (addons-431563) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0115 11:37:41.592740  212092 main.go:141] libmachine: (addons-431563) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17957-203994/.minikube
	I0115 11:37:41.592752  212092 main.go:141] libmachine: (addons-431563) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0115 11:37:41.592774  212092 main.go:141] libmachine: (addons-431563) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17957-203994
	I0115 11:37:41.592795  212092 main.go:141] libmachine: (addons-431563) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0115 11:37:41.592802  212092 main.go:141] libmachine: (addons-431563) Creating domain...
	I0115 11:37:41.592816  212092 main.go:141] libmachine: (addons-431563) DBG | Checking permissions on dir: /home/jenkins
	I0115 11:37:41.592826  212092 main.go:141] libmachine: (addons-431563) DBG | Checking permissions on dir: /home
	I0115 11:37:41.592839  212092 main.go:141] libmachine: (addons-431563) DBG | Skipping /home - not owner
	I0115 11:37:41.594097  212092 main.go:141] libmachine: (addons-431563) define libvirt domain using xml: 
	I0115 11:37:41.594161  212092 main.go:141] libmachine: (addons-431563) <domain type='kvm'>
	I0115 11:37:41.594189  212092 main.go:141] libmachine: (addons-431563)   <name>addons-431563</name>
	I0115 11:37:41.594203  212092 main.go:141] libmachine: (addons-431563)   <memory unit='MiB'>4000</memory>
	I0115 11:37:41.594241  212092 main.go:141] libmachine: (addons-431563)   <vcpu>2</vcpu>
	I0115 11:37:41.594267  212092 main.go:141] libmachine: (addons-431563)   <features>
	I0115 11:37:41.594278  212092 main.go:141] libmachine: (addons-431563)     <acpi/>
	I0115 11:37:41.594291  212092 main.go:141] libmachine: (addons-431563)     <apic/>
	I0115 11:37:41.594308  212092 main.go:141] libmachine: (addons-431563)     <pae/>
	I0115 11:37:41.594317  212092 main.go:141] libmachine: (addons-431563)     
	I0115 11:37:41.594326  212092 main.go:141] libmachine: (addons-431563)   </features>
	I0115 11:37:41.594339  212092 main.go:141] libmachine: (addons-431563)   <cpu mode='host-passthrough'>
	I0115 11:37:41.594362  212092 main.go:141] libmachine: (addons-431563)   
	I0115 11:37:41.594377  212092 main.go:141] libmachine: (addons-431563)   </cpu>
	I0115 11:37:41.594384  212092 main.go:141] libmachine: (addons-431563)   <os>
	I0115 11:37:41.594394  212092 main.go:141] libmachine: (addons-431563)     <type>hvm</type>
	I0115 11:37:41.594413  212092 main.go:141] libmachine: (addons-431563)     <boot dev='cdrom'/>
	I0115 11:37:41.594432  212092 main.go:141] libmachine: (addons-431563)     <boot dev='hd'/>
	I0115 11:37:41.594446  212092 main.go:141] libmachine: (addons-431563)     <bootmenu enable='no'/>
	I0115 11:37:41.594464  212092 main.go:141] libmachine: (addons-431563)   </os>
	I0115 11:37:41.594474  212092 main.go:141] libmachine: (addons-431563)   <devices>
	I0115 11:37:41.594487  212092 main.go:141] libmachine: (addons-431563)     <disk type='file' device='cdrom'>
	I0115 11:37:41.594502  212092 main.go:141] libmachine: (addons-431563)       <source file='/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/boot2docker.iso'/>
	I0115 11:37:41.594519  212092 main.go:141] libmachine: (addons-431563)       <target dev='hdc' bus='scsi'/>
	I0115 11:37:41.594533  212092 main.go:141] libmachine: (addons-431563)       <readonly/>
	I0115 11:37:41.594544  212092 main.go:141] libmachine: (addons-431563)     </disk>
	I0115 11:37:41.594558  212092 main.go:141] libmachine: (addons-431563)     <disk type='file' device='disk'>
	I0115 11:37:41.594572  212092 main.go:141] libmachine: (addons-431563)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0115 11:37:41.594595  212092 main.go:141] libmachine: (addons-431563)       <source file='/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/addons-431563.rawdisk'/>
	I0115 11:37:41.594614  212092 main.go:141] libmachine: (addons-431563)       <target dev='hda' bus='virtio'/>
	I0115 11:37:41.594628  212092 main.go:141] libmachine: (addons-431563)     </disk>
	I0115 11:37:41.594641  212092 main.go:141] libmachine: (addons-431563)     <interface type='network'>
	I0115 11:37:41.594655  212092 main.go:141] libmachine: (addons-431563)       <source network='mk-addons-431563'/>
	I0115 11:37:41.594667  212092 main.go:141] libmachine: (addons-431563)       <model type='virtio'/>
	I0115 11:37:41.594678  212092 main.go:141] libmachine: (addons-431563)     </interface>
	I0115 11:37:41.594716  212092 main.go:141] libmachine: (addons-431563)     <interface type='network'>
	I0115 11:37:41.594733  212092 main.go:141] libmachine: (addons-431563)       <source network='default'/>
	I0115 11:37:41.594745  212092 main.go:141] libmachine: (addons-431563)       <model type='virtio'/>
	I0115 11:37:41.594758  212092 main.go:141] libmachine: (addons-431563)     </interface>
	I0115 11:37:41.594778  212092 main.go:141] libmachine: (addons-431563)     <serial type='pty'>
	I0115 11:37:41.594792  212092 main.go:141] libmachine: (addons-431563)       <target port='0'/>
	I0115 11:37:41.594804  212092 main.go:141] libmachine: (addons-431563)     </serial>
	I0115 11:37:41.594817  212092 main.go:141] libmachine: (addons-431563)     <console type='pty'>
	I0115 11:37:41.594830  212092 main.go:141] libmachine: (addons-431563)       <target type='serial' port='0'/>
	I0115 11:37:41.594876  212092 main.go:141] libmachine: (addons-431563)     </console>
	I0115 11:37:41.594898  212092 main.go:141] libmachine: (addons-431563)     <rng model='virtio'>
	I0115 11:37:41.594910  212092 main.go:141] libmachine: (addons-431563)       <backend model='random'>/dev/random</backend>
	I0115 11:37:41.594916  212092 main.go:141] libmachine: (addons-431563)     </rng>
	I0115 11:37:41.594922  212092 main.go:141] libmachine: (addons-431563)     
	I0115 11:37:41.594930  212092 main.go:141] libmachine: (addons-431563)     
	I0115 11:37:41.594936  212092 main.go:141] libmachine: (addons-431563)   </devices>
	I0115 11:37:41.594943  212092 main.go:141] libmachine: (addons-431563) </domain>
	I0115 11:37:41.594951  212092 main.go:141] libmachine: (addons-431563) 
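For readability, the libvirt domain definition interleaved across the log lines above can be reassembled into plain XML (element values copied verbatim from the log; log lines that carried only whitespace are dropped):

```xml
<domain type='kvm'>
  <name>addons-431563</name>
  <memory unit='MiB'>4000</memory>
  <vcpu>2</vcpu>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-passthrough'>
  </cpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/boot2docker.iso'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/addons-431563.rawdisk'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='mk-addons-431563'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <rng model='virtio'>
      <backend model='random'>/dev/random</backend>
    </rng>
  </devices>
</domain>
```

Note the two NICs: one on the per-cluster network `mk-addons-431563` and one on libvirt's `default` network, matching the two MAC addresses (`52:54:00:0f:6a:05` and `52:54:00:4b:69:5a`) reported in the log lines that follow.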
	I0115 11:37:41.599054  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:4b:69:5a in network default
	I0115 11:37:41.599678  212092 main.go:141] libmachine: (addons-431563) Ensuring networks are active...
	I0115 11:37:41.599702  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:37:41.600357  212092 main.go:141] libmachine: (addons-431563) Ensuring network default is active
	I0115 11:37:41.600636  212092 main.go:141] libmachine: (addons-431563) Ensuring network mk-addons-431563 is active
	I0115 11:37:41.601068  212092 main.go:141] libmachine: (addons-431563) Getting domain xml...
	I0115 11:37:41.601746  212092 main.go:141] libmachine: (addons-431563) Creating domain...
	I0115 11:37:42.784269  212092 main.go:141] libmachine: (addons-431563) Waiting to get IP...
	I0115 11:37:42.784938  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:37:42.785381  212092 main.go:141] libmachine: (addons-431563) DBG | unable to find current IP address of domain addons-431563 in network mk-addons-431563
	I0115 11:37:42.785421  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:42.785356  212114 retry.go:31] will retry after 299.763588ms: waiting for machine to come up
	I0115 11:37:43.086975  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:37:43.087370  212092 main.go:141] libmachine: (addons-431563) DBG | unable to find current IP address of domain addons-431563 in network mk-addons-431563
	I0115 11:37:43.087408  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:43.087322  212114 retry.go:31] will retry after 271.694044ms: waiting for machine to come up
	I0115 11:37:43.360820  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:37:43.361332  212092 main.go:141] libmachine: (addons-431563) DBG | unable to find current IP address of domain addons-431563 in network mk-addons-431563
	I0115 11:37:43.361371  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:43.361289  212114 retry.go:31] will retry after 366.289908ms: waiting for machine to come up
	I0115 11:37:43.728943  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:37:43.729458  212092 main.go:141] libmachine: (addons-431563) DBG | unable to find current IP address of domain addons-431563 in network mk-addons-431563
	I0115 11:37:43.729488  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:43.729384  212114 retry.go:31] will retry after 411.171043ms: waiting for machine to come up
	I0115 11:37:44.142042  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:37:44.142404  212092 main.go:141] libmachine: (addons-431563) DBG | unable to find current IP address of domain addons-431563 in network mk-addons-431563
	I0115 11:37:44.142430  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:44.142354  212114 retry.go:31] will retry after 696.15335ms: waiting for machine to come up
	I0115 11:37:44.840300  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:37:44.840672  212092 main.go:141] libmachine: (addons-431563) DBG | unable to find current IP address of domain addons-431563 in network mk-addons-431563
	I0115 11:37:44.840701  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:44.840633  212114 retry.go:31] will retry after 831.270206ms: waiting for machine to come up
	I0115 11:37:45.673748  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:37:45.675718  212092 main.go:141] libmachine: (addons-431563) DBG | unable to find current IP address of domain addons-431563 in network mk-addons-431563
	I0115 11:37:45.675745  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:45.674083  212114 retry.go:31] will retry after 790.67111ms: waiting for machine to come up
	I0115 11:37:46.466460  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:37:46.466975  212092 main.go:141] libmachine: (addons-431563) DBG | unable to find current IP address of domain addons-431563 in network mk-addons-431563
	I0115 11:37:46.467002  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:46.466925  212114 retry.go:31] will retry after 1.489786666s: waiting for machine to come up
	I0115 11:37:47.958798  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:37:47.959255  212092 main.go:141] libmachine: (addons-431563) DBG | unable to find current IP address of domain addons-431563 in network mk-addons-431563
	I0115 11:37:47.959289  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:47.959179  212114 retry.go:31] will retry after 1.258373908s: waiting for machine to come up
	I0115 11:37:49.219808  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:37:49.220224  212092 main.go:141] libmachine: (addons-431563) DBG | unable to find current IP address of domain addons-431563 in network mk-addons-431563
	I0115 11:37:49.220251  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:49.220171  212114 retry.go:31] will retry after 1.86576184s: waiting for machine to come up
	I0115 11:37:51.087122  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:37:51.087571  212092 main.go:141] libmachine: (addons-431563) DBG | unable to find current IP address of domain addons-431563 in network mk-addons-431563
	I0115 11:37:51.087625  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:51.087515  212114 retry.go:31] will retry after 2.793643295s: waiting for machine to come up
	I0115 11:37:53.884551  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:37:53.884911  212092 main.go:141] libmachine: (addons-431563) DBG | unable to find current IP address of domain addons-431563 in network mk-addons-431563
	I0115 11:37:53.884947  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:53.884859  212114 retry.go:31] will retry after 2.332467792s: waiting for machine to come up
	I0115 11:37:56.218592  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:37:56.218964  212092 main.go:141] libmachine: (addons-431563) DBG | unable to find current IP address of domain addons-431563 in network mk-addons-431563
	I0115 11:37:56.218994  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:56.218905  212114 retry.go:31] will retry after 3.43707879s: waiting for machine to come up
	I0115 11:37:59.660555  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:37:59.660943  212092 main.go:141] libmachine: (addons-431563) DBG | unable to find current IP address of domain addons-431563 in network mk-addons-431563
	I0115 11:37:59.660981  212092 main.go:141] libmachine: (addons-431563) DBG | I0115 11:37:59.660895  212114 retry.go:31] will retry after 5.642571642s: waiting for machine to come up
	I0115 11:38:05.308686  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.309177  212092 main.go:141] libmachine: (addons-431563) Found IP for machine: 192.168.39.212
	I0115 11:38:05.309199  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has current primary IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.309205  212092 main.go:141] libmachine: (addons-431563) Reserving static IP address...
	I0115 11:38:05.309590  212092 main.go:141] libmachine: (addons-431563) DBG | unable to find host DHCP lease matching {name: "addons-431563", mac: "52:54:00:0f:6a:05", ip: "192.168.39.212"} in network mk-addons-431563
	I0115 11:38:05.382773  212092 main.go:141] libmachine: (addons-431563) DBG | Getting to WaitForSSH function...
	I0115 11:38:05.382804  212092 main.go:141] libmachine: (addons-431563) Reserved static IP address: 192.168.39.212
	I0115 11:38:05.382815  212092 main.go:141] libmachine: (addons-431563) Waiting for SSH to be available...
	I0115 11:38:05.385512  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.385908  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:05.385944  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.386056  212092 main.go:141] libmachine: (addons-431563) DBG | Using SSH client type: external
	I0115 11:38:05.386107  212092 main.go:141] libmachine: (addons-431563) DBG | Using SSH private key: /home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa (-rw-------)
	I0115 11:38:05.386152  212092 main.go:141] libmachine: (addons-431563) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0115 11:38:05.386168  212092 main.go:141] libmachine: (addons-431563) DBG | About to run SSH command:
	I0115 11:38:05.386183  212092 main.go:141] libmachine: (addons-431563) DBG | exit 0
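The argument slice logged above corresponds to an external `/usr/bin/ssh` invocation roughly equivalent to the following (reconstructed from the logged flags for readability; the trailing `exit 0` is the probe command the driver runs to confirm SSH is up):

```
/usr/bin/ssh -F /dev/null \
  -o ConnectionAttempts=3 -o ConnectTimeout=10 \
  -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
  -o PasswordAuthentication=no -o ServerAliveInterval=60 \
  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  -o IdentitiesOnly=yes \
  -i /home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa \
  -p 22 docker@192.168.39.212 "exit 0"
```

`StrictHostKeyChecking=no` with a `/dev/null` known-hosts file is appropriate here because the VM is freshly created and its host key is not yet known.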
	I0115 11:38:05.475710  212092 main.go:141] libmachine: (addons-431563) DBG | SSH cmd err, output: <nil>: 
	I0115 11:38:05.475933  212092 main.go:141] libmachine: (addons-431563) KVM machine creation complete!
	I0115 11:38:05.476256  212092 main.go:141] libmachine: (addons-431563) Calling .GetConfigRaw
	I0115 11:38:05.476931  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:05.477122  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:05.477288  212092 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0115 11:38:05.477307  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:05.478469  212092 main.go:141] libmachine: Detecting operating system of created instance...
	I0115 11:38:05.478484  212092 main.go:141] libmachine: Waiting for SSH to be available...
	I0115 11:38:05.478491  212092 main.go:141] libmachine: Getting to WaitForSSH function...
	I0115 11:38:05.478498  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:05.480493  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.481369  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:05.481426  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.481975  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:05.482181  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:05.482335  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:05.482476  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:05.482664  212092 main.go:141] libmachine: Using SSH client type: native
	I0115 11:38:05.483064  212092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0115 11:38:05.483080  212092 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0115 11:38:05.594872  212092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 11:38:05.594902  212092 main.go:141] libmachine: Detecting the provisioner...
	I0115 11:38:05.594913  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:05.597990  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.598381  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:05.598410  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.598608  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:05.598832  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:05.599029  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:05.599210  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:05.599422  212092 main.go:141] libmachine: Using SSH client type: native
	I0115 11:38:05.599760  212092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0115 11:38:05.599774  212092 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0115 11:38:05.716352  212092 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0115 11:38:05.716461  212092 main.go:141] libmachine: found compatible host: buildroot
	I0115 11:38:05.716476  212092 main.go:141] libmachine: Provisioning with buildroot...
	I0115 11:38:05.716489  212092 main.go:141] libmachine: (addons-431563) Calling .GetMachineName
	I0115 11:38:05.716749  212092 buildroot.go:166] provisioning hostname "addons-431563"
	I0115 11:38:05.716776  212092 main.go:141] libmachine: (addons-431563) Calling .GetMachineName
	I0115 11:38:05.716941  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:05.719564  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.719953  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:05.719997  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.720129  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:05.720351  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:05.720497  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:05.720616  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:05.720780  212092 main.go:141] libmachine: Using SSH client type: native
	I0115 11:38:05.721091  212092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0115 11:38:05.721102  212092 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-431563 && echo "addons-431563" | sudo tee /etc/hostname
	I0115 11:38:05.843713  212092 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-431563
	
	I0115 11:38:05.843746  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:05.846443  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.846780  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:05.846815  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.846965  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:05.847182  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:05.847347  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:05.847473  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:05.847613  212092 main.go:141] libmachine: Using SSH client type: native
	I0115 11:38:05.847986  212092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0115 11:38:05.848004  212092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-431563' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-431563/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-431563' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 11:38:05.968225  212092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 11:38:05.968263  212092 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17957-203994/.minikube CaCertPath:/home/jenkins/minikube-integration/17957-203994/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17957-203994/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17957-203994/.minikube}
	I0115 11:38:05.968299  212092 buildroot.go:174] setting up certificates
	I0115 11:38:05.968333  212092 provision.go:83] configureAuth start
	I0115 11:38:05.968347  212092 main.go:141] libmachine: (addons-431563) Calling .GetMachineName
	I0115 11:38:05.968649  212092 main.go:141] libmachine: (addons-431563) Calling .GetIP
	I0115 11:38:05.971204  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.971601  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:05.971642  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.971777  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:05.973932  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.974252  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:05.974281  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:05.974368  212092 provision.go:138] copyHostCerts
	I0115 11:38:05.974437  212092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17957-203994/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17957-203994/.minikube/ca.pem (1078 bytes)
	I0115 11:38:05.974580  212092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17957-203994/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17957-203994/.minikube/cert.pem (1123 bytes)
	I0115 11:38:05.974674  212092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17957-203994/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17957-203994/.minikube/key.pem (1679 bytes)
	I0115 11:38:05.974763  212092 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17957-203994/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17957-203994/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17957-203994/.minikube/certs/ca-key.pem org=jenkins.addons-431563 san=[192.168.39.212 192.168.39.212 localhost 127.0.0.1 minikube addons-431563]
	I0115 11:38:06.050170  212092 provision.go:172] copyRemoteCerts
	I0115 11:38:06.050234  212092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 11:38:06.050273  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:06.053050  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.053472  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:06.053510  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.053675  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:06.053875  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:06.054040  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:06.054188  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:06.140800  212092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-203994/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0115 11:38:06.166107  212092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-203994/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0115 11:38:06.190790  212092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-203994/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 11:38:06.215111  212092 provision.go:86] duration metric: configureAuth took 246.75977ms
	I0115 11:38:06.215144  212092 buildroot.go:189] setting minikube options for container-runtime
	I0115 11:38:06.215345  212092 config.go:182] Loaded profile config "addons-431563": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:38:06.215374  212092 main.go:141] libmachine: Checking connection to Docker...
	I0115 11:38:06.215390  212092 main.go:141] libmachine: (addons-431563) Calling .GetURL
	I0115 11:38:06.216578  212092 main.go:141] libmachine: (addons-431563) DBG | Using libvirt version 6000000
	I0115 11:38:06.218713  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.219093  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:06.219130  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.219319  212092 main.go:141] libmachine: Docker is up and running!
	I0115 11:38:06.219337  212092 main.go:141] libmachine: Reticulating splines...
	I0115 11:38:06.219347  212092 client.go:171] LocalClient.Create took 25.461267126s
	I0115 11:38:06.219373  212092 start.go:167] duration metric: libmachine.API.Create for "addons-431563" took 25.46133467s
	I0115 11:38:06.219386  212092 start.go:300] post-start starting for "addons-431563" (driver="kvm2")
	I0115 11:38:06.219413  212092 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 11:38:06.219440  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:06.219730  212092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 11:38:06.219757  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:06.221789  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.222177  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:06.222205  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.222370  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:06.222552  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:06.222680  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:06.222786  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:06.308937  212092 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 11:38:06.313242  212092 info.go:137] Remote host: Buildroot 2021.02.12
	I0115 11:38:06.313273  212092 filesync.go:126] Scanning /home/jenkins/minikube-integration/17957-203994/.minikube/addons for local assets ...
	I0115 11:38:06.313350  212092 filesync.go:126] Scanning /home/jenkins/minikube-integration/17957-203994/.minikube/files for local assets ...
	I0115 11:38:06.313383  212092 start.go:303] post-start completed in 93.986123ms
	I0115 11:38:06.313426  212092 main.go:141] libmachine: (addons-431563) Calling .GetConfigRaw
	I0115 11:38:06.313957  212092 main.go:141] libmachine: (addons-431563) Calling .GetIP
	I0115 11:38:06.316621  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.317004  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:06.317055  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.317265  212092 profile.go:148] Saving config to /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/config.json ...
	I0115 11:38:06.317478  212092 start.go:128] duration metric: createHost completed in 25.577133902s
	I0115 11:38:06.317509  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:06.319738  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.320060  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:06.320089  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.320176  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:06.320359  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:06.320538  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:06.320676  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:06.320847  212092 main.go:141] libmachine: Using SSH client type: native
	I0115 11:38:06.321186  212092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0115 11:38:06.321198  212092 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0115 11:38:06.436330  212092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705318686.406040139
	
	I0115 11:38:06.436356  212092 fix.go:206] guest clock: 1705318686.406040139
	I0115 11:38:06.436363  212092 fix.go:219] Guest: 2024-01-15 11:38:06.406040139 +0000 UTC Remote: 2024-01-15 11:38:06.317493822 +0000 UTC m=+25.693543600 (delta=88.546317ms)
	I0115 11:38:06.436396  212092 fix.go:190] guest clock delta is within tolerance: 88.546317ms
	I0115 11:38:06.436403  212092 start.go:83] releasing machines lock for "addons-431563", held for 25.696141306s
	I0115 11:38:06.436432  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:06.436699  212092 main.go:141] libmachine: (addons-431563) Calling .GetIP
	I0115 11:38:06.439102  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.439477  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:06.439499  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.439616  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:06.440125  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:06.440303  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:06.440428  212092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 11:38:06.440490  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:06.440545  212092 ssh_runner.go:195] Run: cat /version.json
	I0115 11:38:06.440572  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:06.443156  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.443358  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.443520  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:06.443550  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.443661  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:06.443816  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:06.443836  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:06.443844  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:06.444020  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:06.444035  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:06.444160  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:06.444166  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:06.444275  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:06.444430  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:06.533890  212092 ssh_runner.go:195] Run: systemctl --version
	I0115 11:38:06.558460  212092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0115 11:38:06.564150  212092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0115 11:38:06.564224  212092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 11:38:06.578364  212092 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0115 11:38:06.578386  212092 start.go:475] detecting cgroup driver to use...
	I0115 11:38:06.578452  212092 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0115 11:38:06.610560  212092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0115 11:38:06.623398  212092 docker.go:217] disabling cri-docker service (if available) ...
	I0115 11:38:06.623467  212092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 11:38:06.636251  212092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 11:38:06.648859  212092 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 11:38:06.755140  212092 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 11:38:06.871018  212092 docker.go:233] disabling docker service ...
	I0115 11:38:06.871122  212092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 11:38:06.884979  212092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 11:38:06.896548  212092 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 11:38:07.001194  212092 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 11:38:07.098757  212092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 11:38:07.112320  212092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 11:38:07.128886  212092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0115 11:38:07.138780  212092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0115 11:38:07.148487  212092 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0115 11:38:07.148542  212092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0115 11:38:07.158108  212092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 11:38:07.167891  212092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0115 11:38:07.177889  212092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 11:38:07.187874  212092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 11:38:07.198107  212092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0115 11:38:07.208240  212092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 11:38:07.217012  212092 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0115 11:38:07.217102  212092 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0115 11:38:07.230661  212092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 11:38:07.239406  212092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 11:38:07.345248  212092 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0115 11:38:07.375113  212092 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0115 11:38:07.375212  212092 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 11:38:07.380356  212092 retry.go:31] will retry after 912.164321ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0115 11:38:08.293515  212092 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 11:38:08.299181  212092 start.go:543] Will wait 60s for crictl version
	I0115 11:38:08.299267  212092 ssh_runner.go:195] Run: which crictl
	I0115 11:38:08.303153  212092 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 11:38:08.347263  212092 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0115 11:38:08.347386  212092 ssh_runner.go:195] Run: containerd --version
	I0115 11:38:08.384653  212092 ssh_runner.go:195] Run: containerd --version
	I0115 11:38:08.418607  212092 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0115 11:38:08.420143  212092 main.go:141] libmachine: (addons-431563) Calling .GetIP
	I0115 11:38:08.422856  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:08.423173  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:08.423206  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:08.423367  212092 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0115 11:38:08.427313  212092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 11:38:08.439744  212092 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 11:38:08.439822  212092 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 11:38:08.477907  212092 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0115 11:38:08.477988  212092 ssh_runner.go:195] Run: which lz4
	I0115 11:38:08.481957  212092 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0115 11:38:08.486085  212092 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 11:38:08.486118  212092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-203994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I0115 11:38:10.233571  212092 containerd.go:548] Took 1.751638 seconds to copy over tarball
	I0115 11:38:10.233645  212092 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 11:38:13.229619  212092 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.995943903s)
	I0115 11:38:13.229650  212092 containerd.go:555] Took 2.996052 seconds to extract the tarball
	I0115 11:38:13.229664  212092 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 11:38:13.271044  212092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 11:38:13.376480  212092 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0115 11:38:13.400140  212092 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 11:38:13.445523  212092 retry.go:31] will retry after 275.970778ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-15T11:38:13Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0115 11:38:13.722078  212092 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 11:38:13.759109  212092 containerd.go:612] all images are preloaded for containerd runtime.
	I0115 11:38:13.759139  212092 cache_images.go:84] Images are preloaded, skipping loading
	I0115 11:38:13.759197  212092 ssh_runner.go:195] Run: sudo crictl info
	I0115 11:38:13.792996  212092 cni.go:84] Creating CNI manager for ""
	I0115 11:38:13.793019  212092 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0115 11:38:13.793043  212092 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 11:38:13.793085  212092 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-431563 NodeName:addons-431563 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 11:38:13.793244  212092 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-431563"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
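The generated kubeadm config above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A quick sanity check is to list each document's `kind:` without a YAML parser (the skeleton file below is a cut-down stand-in for the full config):

```shell
# List the "kind:" of every document in a multi-doc kubeadm config.
f=$(mktemp)
cat > "$f" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
kinds=$(grep '^kind:' "$f" | awk '{print $2}')
echo "$kinds"
```

minikube later writes the real file to `/var/tmp/minikube/kubeadm.yaml.new` (2108 bytes, per the scp line below) before copying it into place for `kubeadm init`.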
	I0115 11:38:13.793346  212092 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-431563 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-431563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 11:38:13.793421  212092 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 11:38:13.802217  212092 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 11:38:13.802276  212092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 11:38:13.810391  212092 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0115 11:38:13.826082  212092 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 11:38:13.842046  212092 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I0115 11:38:13.857485  212092 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I0115 11:38:13.860996  212092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
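The one-liner above pins `control-plane.minikube.internal` in `/etc/hosts` idempotently: strip any existing line for the name, append the current IP, and swap the file in via a temp copy. Replayed here against a scratch file instead of the real `/etc/hosts`:

```shell
# Drop any stale mapping for the control-plane name, then append the fresh one.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.39.1\tcontrol-plane.minikube.internal\n' > "$hosts"
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '192.168.39.212\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Writing to `$hosts.new` first and then moving it into place is what makes the real command safe to re-run on every start.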
	I0115 11:38:13.872799  212092 certs.go:56] Setting up /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563 for IP: 192.168.39.212
	I0115 11:38:13.872826  212092 certs.go:190] acquiring lock for shared ca certs: {Name:mkf30ef04cd7011e83b1039c324cfdb3d7951b1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:38:13.872954  212092 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17957-203994/.minikube/ca.key
	I0115 11:38:14.097379  212092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-203994/.minikube/ca.crt ...
	I0115 11:38:14.097412  212092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-203994/.minikube/ca.crt: {Name:mk3d758cd6571820d5d2f6b6b2c44409c40b4588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:38:14.097586  212092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-203994/.minikube/ca.key ...
	I0115 11:38:14.097600  212092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-203994/.minikube/ca.key: {Name:mk9b04385af7494e163828cf7eaab285a3dc32ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:38:14.097684  212092 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17957-203994/.minikube/proxy-client-ca.key
	I0115 11:38:14.169614  212092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-203994/.minikube/proxy-client-ca.crt ...
	I0115 11:38:14.169649  212092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-203994/.minikube/proxy-client-ca.crt: {Name:mk630492beee2656436de1358abbfbf3b09a2bae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:38:14.169805  212092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-203994/.minikube/proxy-client-ca.key ...
	I0115 11:38:14.169816  212092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-203994/.minikube/proxy-client-ca.key: {Name:mkf9e2dd3fba08ddb9c62431b0db697b9df4aa91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:38:14.169918  212092 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.key
	I0115 11:38:14.169930  212092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt with IP's: []
	I0115 11:38:14.392182  212092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt ...
	I0115 11:38:14.392223  212092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: {Name:mke3dc00df587d01e36b8852d7a9bd297aed0e67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:38:14.392396  212092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.key ...
	I0115 11:38:14.392407  212092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.key: {Name:mk459d47fc213417ce05749b5e7f783d74c39d1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:38:14.392471  212092 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/apiserver.key.543da273
	I0115 11:38:14.392489  212092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/apiserver.crt.543da273 with IP's: [192.168.39.212 10.96.0.1 127.0.0.1 10.0.0.1]
	I0115 11:38:14.672894  212092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/apiserver.crt.543da273 ...
	I0115 11:38:14.672927  212092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/apiserver.crt.543da273: {Name:mkce9436af0232a5c1628ee27f3ca591b33d606d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:38:14.673083  212092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/apiserver.key.543da273 ...
	I0115 11:38:14.673097  212092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/apiserver.key.543da273: {Name:mkd599fd9f59075ae9ca4cf5b9db00ca0bfa9a64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:38:14.673162  212092 certs.go:337] copying /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/apiserver.crt.543da273 -> /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/apiserver.crt
	I0115 11:38:14.673263  212092 certs.go:341] copying /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/apiserver.key.543da273 -> /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/apiserver.key
	I0115 11:38:14.673307  212092 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/proxy-client.key
	I0115 11:38:14.673323  212092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/proxy-client.crt with IP's: []
	I0115 11:38:14.827872  212092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/proxy-client.crt ...
	I0115 11:38:14.827906  212092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/proxy-client.crt: {Name:mk1e8019193c51b96f3280b846489635ad7ea933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:38:14.828068  212092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/proxy-client.key ...
	I0115 11:38:14.828081  212092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/proxy-client.key: {Name:mk28400f8c2fbb9443a0b11236aac39bad5378f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:38:14.828244  212092 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-203994/.minikube/certs/home/jenkins/minikube-integration/17957-203994/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 11:38:14.828280  212092 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-203994/.minikube/certs/home/jenkins/minikube-integration/17957-203994/.minikube/certs/ca.pem (1078 bytes)
	I0115 11:38:14.828304  212092 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-203994/.minikube/certs/home/jenkins/minikube-integration/17957-203994/.minikube/certs/cert.pem (1123 bytes)
	I0115 11:38:14.828328  212092 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-203994/.minikube/certs/home/jenkins/minikube-integration/17957-203994/.minikube/certs/key.pem (1679 bytes)
	I0115 11:38:14.828991  212092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 11:38:14.852764  212092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 11:38:14.875256  212092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 11:38:14.896975  212092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0115 11:38:14.918784  212092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-203994/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 11:38:14.940337  212092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-203994/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 11:38:14.961720  212092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-203994/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 11:38:14.984673  212092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-203994/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0115 11:38:15.007012  212092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-203994/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 11:38:15.029172  212092 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 11:38:15.044515  212092 ssh_runner.go:195] Run: openssl version
	I0115 11:38:15.049680  212092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 11:38:15.058886  212092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:38:15.063100  212092 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 11:38 /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:38:15.063169  212092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:38:15.068256  212092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
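The two symlink steps above install the minikube CA into the system trust store: `openssl x509 -hash` prints the 8-hex-digit subject hash, and the cert is linked as `<hash>.0` (here `b5213941.0`), which is the name OpenSSL uses to look certificates up. A self-contained replay with a throwaway self-signed CA standing in for `minikubeCA.pem`:

```shell
# Reproduce the <subject-hash>.0 naming scheme in a scratch directory.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=minikubeCA" -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$tmp/ca.crt")
ln -fs "$tmp/ca.crt" "$tmp/$hash.0"   # OpenSSL resolves trust by this filename
ls -l "$tmp/$hash.0"
```

The hash is computed from the certificate's subject name only, which is why the `test -L || ln -fs` guard in the log is enough to make the step idempotent.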
	I0115 11:38:15.077036  212092 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 11:38:15.080933  212092 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 11:38:15.080978  212092 kubeadm.go:404] StartCluster: {Name:addons-431563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-431563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:38:15.081051  212092 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0115 11:38:15.081113  212092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 11:38:15.117859  212092 cri.go:89] found id: ""
	I0115 11:38:15.117935  212092 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 11:38:15.126741  212092 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 11:38:15.135047  212092 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 11:38:15.143223  212092 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 11:38:15.143266  212092 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0115 11:38:15.190879  212092 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0115 11:38:15.190994  212092 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 11:38:15.330514  212092 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 11:38:15.330649  212092 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 11:38:15.330743  212092 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 11:38:15.540182  212092 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 11:38:15.542690  212092 out.go:204]   - Generating certificates and keys ...
	I0115 11:38:15.542798  212092 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 11:38:15.542874  212092 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 11:38:15.610885  212092 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 11:38:15.820315  212092 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0115 11:38:15.934252  212092 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0115 11:38:16.278029  212092 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0115 11:38:16.427620  212092 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0115 11:38:16.427856  212092 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-431563 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0115 11:38:16.515423  212092 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0115 11:38:16.515597  212092 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-431563 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0115 11:38:16.646484  212092 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 11:38:16.837715  212092 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 11:38:16.915896  212092 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0115 11:38:16.916061  212092 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 11:38:17.057521  212092 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 11:38:17.505516  212092 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 11:38:17.620703  212092 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 11:38:17.925145  212092 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 11:38:17.925868  212092 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 11:38:17.928169  212092 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 11:38:17.930072  212092 out.go:204]   - Booting up control plane ...
	I0115 11:38:17.930169  212092 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 11:38:17.930288  212092 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 11:38:17.930384  212092 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 11:38:17.947271  212092 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 11:38:17.947677  212092 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 11:38:17.947995  212092 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 11:38:18.060180  212092 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 11:38:25.559297  212092 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503898 seconds
	I0115 11:38:25.559492  212092 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 11:38:25.577289  212092 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 11:38:26.112705  212092 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 11:38:26.112945  212092 kubeadm.go:322] [mark-control-plane] Marking the node addons-431563 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 11:38:26.626325  212092 kubeadm.go:322] [bootstrap-token] Using token: 5xspg8.i43ouy6d05jsbbgp
	I0115 11:38:26.627795  212092 out.go:204]   - Configuring RBAC rules ...
	I0115 11:38:26.627937  212092 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 11:38:26.642405  212092 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 11:38:26.650231  212092 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 11:38:26.653865  212092 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 11:38:26.657384  212092 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 11:38:26.661016  212092 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 11:38:26.676246  212092 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 11:38:26.892447  212092 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 11:38:27.050221  212092 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 11:38:27.051299  212092 kubeadm.go:322] 
	I0115 11:38:27.051379  212092 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 11:38:27.051389  212092 kubeadm.go:322] 
	I0115 11:38:27.051485  212092 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 11:38:27.051494  212092 kubeadm.go:322] 
	I0115 11:38:27.051528  212092 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 11:38:27.051611  212092 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 11:38:27.051727  212092 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 11:38:27.051762  212092 kubeadm.go:322] 
	I0115 11:38:27.051862  212092 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0115 11:38:27.051879  212092 kubeadm.go:322] 
	I0115 11:38:27.051955  212092 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 11:38:27.051967  212092 kubeadm.go:322] 
	I0115 11:38:27.052057  212092 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 11:38:27.052161  212092 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 11:38:27.052252  212092 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 11:38:27.052267  212092 kubeadm.go:322] 
	I0115 11:38:27.052385  212092 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 11:38:27.052454  212092 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 11:38:27.052468  212092 kubeadm.go:322] 
	I0115 11:38:27.052575  212092 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 5xspg8.i43ouy6d05jsbbgp \
	I0115 11:38:27.052716  212092 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:116fcb73be0c8e7c6f98e3551b370eb1155cdf21ec286f3007ec62ce5ed9be00 \
	I0115 11:38:27.052747  212092 kubeadm.go:322] 	--control-plane 
	I0115 11:38:27.052757  212092 kubeadm.go:322] 
	I0115 11:38:27.052885  212092 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 11:38:27.052900  212092 kubeadm.go:322] 
	I0115 11:38:27.053020  212092 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 5xspg8.i43ouy6d05jsbbgp \
	I0115 11:38:27.053153  212092 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:116fcb73be0c8e7c6f98e3551b370eb1155cdf21ec286f3007ec62ce5ed9be00 
	I0115 11:38:27.055277  212092 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 11:38:27.055314  212092 cni.go:84] Creating CNI manager for ""
	I0115 11:38:27.055325  212092 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0115 11:38:27.057013  212092 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0115 11:38:27.058473  212092 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0115 11:38:27.068884  212092 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0115 11:38:27.106479  212092 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 11:38:27.106568  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:27.106577  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=71cf7d00913f789829bf5813c1d11b9a83eda53e minikube.k8s.io/name=addons-431563 minikube.k8s.io/updated_at=2024_01_15T11_38_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:27.171094  212092 ops.go:34] apiserver oom_adj: -16
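The `ops.go:34] apiserver oom_adj: -16` line comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` probe above: minikube checks how aggressively the OOM killer may target the apiserver. The same probe against the current shell instead of kube-apiserver (note: `oom_adj` is the legacy file; modern kernels also expose `oom_score_adj`):

```shell
# Read the OOM-killer adjustment of this shell's own process.
adj=$(cat "/proc/$$/oom_score_adj")
echo "oom_score_adj=$adj"
```

A strongly negative value like the apiserver's -16 makes the kernel much less likely to kill the process under memory pressure.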
	I0115 11:38:27.355688  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:27.855931  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:28.355777  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:28.856427  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:29.356632  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:29.856632  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:30.356066  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:30.856298  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:31.356475  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:31.855789  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:32.356643  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:32.856401  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:33.355818  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:33.855810  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:34.355806  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:34.856049  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:35.356287  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:35.856315  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:36.355712  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:36.855873  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:37.356686  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:37.856335  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:38.355777  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:38.856550  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:39.356002  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:39.856630  212092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:38:40.009363  212092 kubeadm.go:1088] duration metric: took 12.902868357s to wait for elevateKubeSystemPrivileges.
	I0115 11:38:40.009423  212092 kubeadm.go:406] StartCluster complete in 24.928448987s
	I0115 11:38:40.009451  212092 settings.go:142] acquiring lock: {Name:mk601c7d607fd0789f4890df12cfdd2f336141fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:38:40.009579  212092 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17957-203994/kubeconfig
	I0115 11:38:40.010053  212092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-203994/kubeconfig: {Name:mk95e2a6ee39818b57381eaa3a79f0366c7f065d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:38:40.010274  212092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 11:38:40.010290  212092 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0115 11:38:40.010396  212092 addons.go:69] Setting gcp-auth=true in profile "addons-431563"
	I0115 11:38:40.010414  212092 addons.go:69] Setting yakd=true in profile "addons-431563"
	I0115 11:38:40.010434  212092 addons.go:234] Setting addon yakd=true in "addons-431563"
	I0115 11:38:40.010437  212092 mustload.go:65] Loading cluster: addons-431563
	I0115 11:38:40.010427  212092 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-431563"
	I0115 11:38:40.010438  212092 addons.go:69] Setting cloud-spanner=true in profile "addons-431563"
	I0115 11:38:40.010471  212092 addons.go:234] Setting addon cloud-spanner=true in "addons-431563"
	I0115 11:38:40.010499  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:40.010508  212092 addons.go:69] Setting default-storageclass=true in profile "addons-431563"
	I0115 11:38:40.010522  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:40.010523  212092 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-431563"
	I0115 11:38:40.010533  212092 addons.go:69] Setting metrics-server=true in profile "addons-431563"
	I0115 11:38:40.010549  212092 addons.go:234] Setting addon metrics-server=true in "addons-431563"
	I0115 11:38:40.010595  212092 config.go:182] Loaded profile config "addons-431563": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:38:40.010630  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:40.010974  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.010986  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.010996  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.010999  212092 addons.go:69] Setting ingress=true in profile "addons-431563"
	I0115 11:38:40.011008  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.011008  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.011018  212092 addons.go:234] Setting addon ingress=true in "addons-431563"
	I0115 11:38:40.011020  212092 addons.go:69] Setting ingress-dns=true in profile "addons-431563"
	I0115 11:38:40.011029  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.011033  212092 addons.go:234] Setting addon ingress-dns=true in "addons-431563"
	I0115 11:38:40.011034  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.011060  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:40.011094  212092 addons.go:69] Setting inspektor-gadget=true in profile "addons-431563"
	I0115 11:38:40.011083  212092 addons.go:69] Setting helm-tiller=true in profile "addons-431563"
	I0115 11:38:40.011106  212092 addons.go:69] Setting registry=true in profile "addons-431563"
	I0115 11:38:40.010522  212092 config.go:182] Loaded profile config "addons-431563": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:38:40.011116  212092 addons.go:234] Setting addon helm-tiller=true in "addons-431563"
	I0115 11:38:40.011118  212092 addons.go:234] Setting addon registry=true in "addons-431563"
	I0115 11:38:40.011117  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:40.011107  212092 addons.go:234] Setting addon inspektor-gadget=true in "addons-431563"
	I0115 11:38:40.011154  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:40.011100  212092 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-431563"
	I0115 11:38:40.011166  212092 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-431563"
	I0115 11:38:40.011174  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:40.011193  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:40.011438  212092 addons.go:69] Setting volumesnapshots=true in profile "addons-431563"
	I0115 11:38:40.011462  212092 addons.go:234] Setting addon volumesnapshots=true in "addons-431563"
	I0115 11:38:40.011492  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.011492  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.011502  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:40.011511  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.011510  212092 addons.go:69] Setting storage-provisioner=true in profile "addons-431563"
	I0115 11:38:40.011514  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.011526  212092 addons.go:234] Setting addon storage-provisioner=true in "addons-431563"
	I0115 11:38:40.011525  212092 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-431563"
	I0115 11:38:40.011532  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.011541  212092 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-431563"
	I0115 11:38:40.011095  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.011552  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.011563  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.011154  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:40.011512  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.011493  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.011806  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.010499  212092 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-431563"
	I0115 11:38:40.010987  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.011863  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.011898  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.011940  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.011966  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.011864  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.012000  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.012027  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.012111  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:40.012124  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:40.012444  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.012462  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.012482  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.012501  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.026869  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44933
	I0115 11:38:40.027699  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40889
	I0115 11:38:40.043835  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39201
	I0115 11:38:40.043976  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45671
	I0115 11:38:40.045683  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.045797  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.046132  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.046408  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.046432  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.046575  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.046595  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.046741  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.046754  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.046820  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.046904  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.047521  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.047568  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.056146  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.056241  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.056391  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.056406  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.056883  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.056935  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.057754  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.057802  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.057822  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.058434  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.058490  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.079811  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40839
	I0115 11:38:40.079839  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41735
	I0115 11:38:40.080472  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.080668  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.081250  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.081273  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.081288  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45941
	I0115 11:38:40.081904  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.081998  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.083052  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.083105  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.083408  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.083424  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.083495  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45319
	I0115 11:38:40.083772  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.083787  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.084145  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.084211  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.084812  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.084853  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.085193  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.085212  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.085279  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.085919  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.085995  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.086724  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.086784  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35631
	I0115 11:38:40.087133  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39151
	I0115 11:38:40.087341  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.087883  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.088048  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.088062  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.089116  212092 addons.go:234] Setting addon default-storageclass=true in "addons-431563"
	I0115 11:38:40.089153  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:40.089554  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.089588  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.089887  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37753
	I0115 11:38:40.090008  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:40.090440  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43037
	I0115 11:38:40.090714  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.090765  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.091002  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0115 11:38:40.091347  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.091792  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.091813  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.091956  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.091977  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.092132  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.092681  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.092718  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.092912  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.092992  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.092997  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.093486  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.093519  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.093706  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.094235  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.094271  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.096176  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.096194  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.096322  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.096336  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.096482  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40423
	I0115 11:38:40.096780  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37625
	I0115 11:38:40.096956  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.097012  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.097602  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.097641  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.097826  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.098060  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.098136  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46843
	I0115 11:38:40.098377  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.098392  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.098573  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.098988  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.099152  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.099431  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.100122  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.100140  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.100598  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.101177  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.101213  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.101515  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.101530  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.102113  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.102306  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.102427  212092 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-431563"
	I0115 11:38:40.102475  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:40.102743  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34425
	I0115 11:38:40.102846  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.102888  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.103352  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.103879  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.103899  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.104234  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.104285  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:40.107240  212092 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0115 11:38:40.105150  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.105186  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:40.108664  212092 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 11:38:40.108680  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 11:38:40.108684  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.108702  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:40.110408  212092 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0115 11:38:40.111756  212092 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0115 11:38:40.111774  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0115 11:38:40.111795  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:40.112003  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.113000  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:40.113772  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.114967  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.115555  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:40.115586  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.115800  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:40.116022  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:40.116088  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:40.116260  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:40.116344  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:40.116487  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:40.116541  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:40.117252  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:40.119875  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35169
	I0115 11:38:40.120049  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43171
	I0115 11:38:40.120479  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.120800  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.121328  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.121347  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.121458  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.121475  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.121729  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.122286  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.122328  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.122610  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.122995  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.124913  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:40.126752  212092 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0115 11:38:40.128481  212092 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0115 11:38:40.128506  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0115 11:38:40.128529  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:40.126041  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34639
	I0115 11:38:40.126501  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
	I0115 11:38:40.129209  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.129960  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.129985  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.130086  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40555
	I0115 11:38:40.130660  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.131180  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.131200  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.131648  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.131703  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.131743  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.131937  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.132551  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:40.132624  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:40.132641  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.132745  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:40.132914  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:40.132983  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.133180  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:40.133730  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.133752  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:40.136804  212092 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0115 11:38:40.134339  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.136622  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:40.136663  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46489
	I0115 11:38:40.138568  212092 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0115 11:38:40.138582  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0115 11:38:40.138604  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:40.138661  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.140736  212092 out.go:177]   - Using image docker.io/registry:2.8.3
	I0115 11:38:40.139836  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.140165  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.141217  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0115 11:38:40.142646  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.144740  212092 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0115 11:38:40.143459  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.143495  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:40.143599  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.143823  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.143924  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:40.145053  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37681
	I0115 11:38:40.146113  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42793
	I0115 11:38:40.146145  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37121
	I0115 11:38:40.146489  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.146534  212092 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0115 11:38:40.146549  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0115 11:38:40.146570  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:40.146616  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.146988  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.147013  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.147076  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:40.147144  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.147214  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.148084  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40307
	I0115 11:38:40.148150  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.148163  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.148203  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.148254  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.148290  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.148301  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.148304  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:40.148402  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.148522  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:40.148521  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:40.148676  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.148737  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.148923  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.148966  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.149018  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.150074  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.150230  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.150261  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.150570  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.150589  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.150708  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.151007  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.151174  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.151850  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:40.151922  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:40.152028  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:40.152028  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:40.152085  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:40.154035  212092 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 11:38:40.155552  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:40.154180  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35823
	I0115 11:38:40.155695  212092 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 11:38:40.155714  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 11:38:40.155732  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:40.157234  212092 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 11:38:40.153133  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.157284  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:40.157310  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.153650  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:40.153775  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:40.158738  212092 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0115 11:38:40.156540  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.156940  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0115 11:38:40.157601  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:40.158879  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.158891  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46313
	I0115 11:38:40.159487  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:40.160256  212092 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0115 11:38:40.160668  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.160739  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.162060  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:40.163600  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:40.163615  212092 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0115 11:38:40.163864  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:40.164118  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.165020  212092 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0115 11:38:40.165040  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.165545  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.166591  212092 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 11:38:40.166630  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.166773  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:40.166821  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:40.167179  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.168083  212092 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 11:38:40.168100  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.168344  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.169246  212092 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0115 11:38:40.169273  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0115 11:38:40.169289  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.169499  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:40.171762  212092 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0115 11:38:40.170512  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0115 11:38:40.170533  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:40.169742  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.170754  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.170917  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.171466  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42351
	I0115 11:38:40.174564  212092 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0115 11:38:40.173175  212092 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0115 11:38:40.173206  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:40.173488  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.173493  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.173788  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:40.175173  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:40.178647  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:40.178772  212092 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 11:38:40.178797  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0115 11:38:40.178810  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:40.180178  212092 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0115 11:38:40.176904  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.177641  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:40.177871  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:40.179449  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:40.180855  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.181473  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:40.181690  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.181768  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:40.182210  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:40.183017  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:40.183022  212092 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0115 11:38:40.183172  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:40.184196  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:40.184202  212092 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0115 11:38:40.184239  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:40.184391  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:40.184447  212092 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 11:38:40.185437  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 11:38:40.185451  212092 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0115 11:38:40.185456  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:40.185511  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.185523  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.185544  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.185700  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:40.185714  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:40.185762  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:40.186436  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:40.186793  212092 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0115 11:38:40.187996  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.188128  212092 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0115 11:38:40.188168  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0115 11:38:40.188186  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:40.188299  212092 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 11:38:40.188310  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0115 11:38:40.188318  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:40.189975  212092 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0115 11:38:40.191601  212092 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0115 11:38:40.191621  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0115 11:38:40.191657  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:40.189074  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:40.189090  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:40.189104  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:40.189120  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:40.189135  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:40.192043  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.189663  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:40.192373  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:40.192808  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.192840  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:40.192854  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:40.192861  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.192930  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:40.193150  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:40.193455  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:40.193817  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:40.194027  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:40.194085  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.194251  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:40.194423  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:40.194453  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.194601  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:40.194659  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:40.196483  212092 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0115 11:38:40.194835  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:40.196121  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.196774  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:40.198027  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:40.198053  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.199918  212092 out.go:177]   - Using image docker.io/busybox:stable
	I0115 11:38:40.198211  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:40.198247  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:40.201233  212092 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 11:38:40.201255  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0115 11:38:40.201279  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:40.201289  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:40.201288  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:40.201880  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:40.204461  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.204858  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:40.204885  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:40.205035  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:40.205227  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:40.205407  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:40.205570  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	W0115 11:38:40.207797  212092 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41166->192.168.39.212:22: read: connection reset by peer
	I0115 11:38:40.207828  212092 retry.go:31] will retry after 358.501234ms: ssh: handshake failed: read tcp 192.168.39.1:41166->192.168.39.212:22: read: connection reset by peer
	W0115 11:38:40.208129  212092 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41180->192.168.39.212:22: read: connection reset by peer
	I0115 11:38:40.208150  212092 retry.go:31] will retry after 187.046571ms: ssh: handshake failed: read tcp 192.168.39.1:41180->192.168.39.212:22: read: connection reset by peer
	I0115 11:38:40.492269  212092 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0115 11:38:40.492298  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0115 11:38:40.504284  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 11:38:40.515704  212092 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-431563" context rescaled to 1 replicas
	I0115 11:38:40.515746  212092 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 11:38:40.517750  212092 out.go:177] * Verifying Kubernetes components...
	I0115 11:38:40.519343  212092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:38:40.545137  212092 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0115 11:38:40.545164  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0115 11:38:40.607245  212092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0115 11:38:40.640568  212092 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 11:38:40.640596  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0115 11:38:40.644473  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 11:38:40.664490  212092 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0115 11:38:40.664529  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0115 11:38:40.758249  212092 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0115 11:38:40.758291  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0115 11:38:40.787612  212092 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0115 11:38:40.787645  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0115 11:38:40.864069  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0115 11:38:40.871166  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 11:38:40.882548  212092 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0115 11:38:40.882571  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0115 11:38:40.884092  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0115 11:38:40.899387  212092 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0115 11:38:40.899407  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0115 11:38:40.902570  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 11:38:40.913406  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 11:38:41.033914  212092 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 11:38:41.033945  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 11:38:41.037934  212092 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0115 11:38:41.037958  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0115 11:38:41.088235  212092 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0115 11:38:41.088270  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0115 11:38:41.113090  212092 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0115 11:38:41.113117  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0115 11:38:41.361601  212092 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0115 11:38:41.361624  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0115 11:38:41.377318  212092 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0115 11:38:41.377350  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0115 11:38:41.385426  212092 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0115 11:38:41.385447  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0115 11:38:41.402133  212092 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0115 11:38:41.402155  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0115 11:38:41.416714  212092 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0115 11:38:41.416738  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0115 11:38:41.433791  212092 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 11:38:41.433817  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 11:38:41.489318  212092 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0115 11:38:41.489343  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0115 11:38:41.500992  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 11:38:41.513370  212092 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0115 11:38:41.513414  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0115 11:38:41.573028  212092 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0115 11:38:41.573058  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0115 11:38:41.595414  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0115 11:38:41.646868  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 11:38:41.664357  212092 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0115 11:38:41.664391  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0115 11:38:41.784984  212092 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0115 11:38:41.785011  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0115 11:38:41.842472  212092 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 11:38:41.842499  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0115 11:38:41.927092  212092 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0115 11:38:41.927120  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0115 11:38:41.981550  212092 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0115 11:38:41.981578  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0115 11:38:42.029390  212092 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0115 11:38:42.029417  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0115 11:38:42.314830  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 11:38:42.499988  212092 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0115 11:38:42.500025  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0115 11:38:42.526924  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0115 11:38:42.529503  212092 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0115 11:38:42.529529  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0115 11:38:42.910786  212092 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0115 11:38:42.910811  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0115 11:38:43.281627  212092 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0115 11:38:43.281657  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0115 11:38:43.352198  212092 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0115 11:38:43.352233  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0115 11:38:43.544667  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0115 11:38:43.602983  212092 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0115 11:38:43.603018  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0115 11:38:43.810387  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0115 11:38:46.773374  212092 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0115 11:38:46.773429  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:46.776661  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:46.777109  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:46.777138  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:46.777324  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:46.777583  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:46.777805  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:46.777985  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:47.551008  212092 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0115 11:38:47.944873  212092 addons.go:234] Setting addon gcp-auth=true in "addons-431563"
	I0115 11:38:47.944968  212092 host.go:66] Checking if "addons-431563" exists ...
	I0115 11:38:47.945454  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:47.945509  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:47.961153  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38757
	I0115 11:38:47.961599  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:47.962210  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:47.962249  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:47.962673  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:47.963188  212092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:38:47.963217  212092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:38:47.979594  212092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39495
	I0115 11:38:47.980229  212092 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:38:47.980758  212092 main.go:141] libmachine: Using API Version  1
	I0115 11:38:47.980782  212092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:38:47.981169  212092 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:38:47.981375  212092 main.go:141] libmachine: (addons-431563) Calling .GetState
	I0115 11:38:47.983197  212092 main.go:141] libmachine: (addons-431563) Calling .DriverName
	I0115 11:38:47.983500  212092 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0115 11:38:47.983525  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHHostname
	I0115 11:38:47.986835  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:47.987386  212092 main.go:141] libmachine: (addons-431563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6a:05", ip: ""} in network mk-addons-431563: {Iface:virbr1 ExpiryTime:2024-01-15 12:37:57 +0000 UTC Type:0 Mac:52:54:00:0f:6a:05 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-431563 Clientid:01:52:54:00:0f:6a:05}
	I0115 11:38:47.987417  212092 main.go:141] libmachine: (addons-431563) DBG | domain addons-431563 has defined IP address 192.168.39.212 and MAC address 52:54:00:0f:6a:05 in network mk-addons-431563
	I0115 11:38:47.987603  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHPort
	I0115 11:38:47.987818  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHKeyPath
	I0115 11:38:47.988005  212092 main.go:141] libmachine: (addons-431563) Calling .GetSSHUsername
	I0115 11:38:47.988233  212092 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/addons-431563/id_rsa Username:docker}
	I0115 11:38:51.967671  212092 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (11.44829358s)
	I0115 11:38:51.967696  212092 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (11.360411949s)
	I0115 11:38:51.967717  212092 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0115 11:38:51.967793  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.323279685s)
	I0115 11:38:51.967851  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.967896  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.096701293s)
	I0115 11:38:51.967909  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.967931  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.967948  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.967854  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (11.103736046s)
	I0115 11:38:51.968000  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.968020  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.968017  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.083899844s)
	I0115 11:38:51.968151  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.065554215s)
	I0115 11:38:51.968191  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.968202  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.968302  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.054867911s)
	I0115 11:38:51.968329  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.968341  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.968397  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.968421  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.968438  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.968442  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.968457  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.968467  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.968424  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.968481  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.968493  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.968506  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.968522  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.968891  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.467871226s)
	I0115 11:38:51.968907  212092 node_ready.go:35] waiting up to 6m0s for node "addons-431563" to be "Ready" ...
	I0115 11:38:51.968928  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.968938  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.969048  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.373607326s)
	I0115 11:38:51.969084  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.969106  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.969127  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.969176  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.969187  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.969358  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.969393  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.969405  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.969414  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.969421  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.969427  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.322525596s)
	I0115 11:38:51.969447  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.969489  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.969507  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.969537  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.969548  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.969557  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.969564  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.969616  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.969640  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.969652  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.969661  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.969669  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.969744  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.969771  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.969780  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.969489  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.969889  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.970021  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.655149894s)
	W0115 11:38:51.970066  212092 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0115 11:38:51.970083  212092 retry.go:31] will retry after 199.840707ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0115 11:38:51.969469  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.970320  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.970374  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.970392  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.970424  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.443462974s)
	I0115 11:38:51.970451  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.970469  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.970576  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.425864336s)
	I0115 11:38:51.970595  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.970605  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.970653  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.970661  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.970670  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.970677  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.971522  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.971555  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.971564  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.973803  212092 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-431563 service yakd-dashboard -n yakd-dashboard
	
	I0115 11:38:51.971813  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.971855  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.971878  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.971887  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.971899  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.971908  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.971929  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.971951  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.971958  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.467640124s)
	I0115 11:38:51.971970  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.972003  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.972034  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.975251  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.975273  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.975275  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.975283  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.975288  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.975302  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.975260  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.975367  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.975374  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.975383  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.975358  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.975432  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.975442  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.975568  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.975580  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.975569  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.975620  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.975642  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.977921  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.977928  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.977942  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.977957  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.977969  212092 addons.go:470] Verifying addon metrics-server=true in "addons-431563"
	I0115 11:38:51.977999  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.978009  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.978015  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.978021  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.978025  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:51.978027  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.978034  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:51.978057  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.978067  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.978075  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.978075  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.978084  212092 addons.go:470] Verifying addon registry=true in "addons-431563"
	I0115 11:38:51.978083  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.978102  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.980912  212092 out.go:177] * Verifying registry addon...
	I0115 11:38:51.978287  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:51.978313  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.978354  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:51.980953  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.980987  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:51.982605  212092 addons.go:470] Verifying addon ingress=true in "addons-431563"
	I0115 11:38:51.984343  212092 out.go:177] * Verifying ingress addon...
	I0115 11:38:51.983263  212092 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0115 11:38:51.984555  212092 node_ready.go:49] node "addons-431563" has status "Ready":"True"
	I0115 11:38:51.986033  212092 node_ready.go:38] duration metric: took 17.092292ms waiting for node "addons-431563" to be "Ready" ...
	I0115 11:38:51.986049  212092 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 11:38:51.986748  212092 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0115 11:38:52.021038  212092 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0115 11:38:52.021070  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:52.021250  212092 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0115 11:38:52.021275  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:52.022147  212092 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-f2m28" in "kube-system" namespace to be "Ready" ...
	I0115 11:38:52.025100  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:52.025118  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:52.025446  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:52.025477  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:52.025494  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:52.048453  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:52.048480  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:52.048825  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:52.048847  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:52.170146  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 11:38:52.496164  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:52.496527  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:53.002437  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:53.002520  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:53.493768  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:53.494012  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:53.995544  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:54.020587  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:54.035620  212092 pod_ready.go:102] pod "coredns-5dd5756b68-f2m28" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:54.497067  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:54.497211  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:54.962233  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (11.151778771s)
	I0115 11:38:54.962326  212092 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.978798303s)
	I0115 11:38:54.962333  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:54.962463  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:54.964421  212092 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 11:38:54.962788  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:54.962815  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:54.966694  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:54.968157  212092 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0115 11:38:54.966720  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:54.969730  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:54.969803  212092 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0115 11:38:54.969824  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0115 11:38:54.970124  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:54.970141  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:54.970156  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:54.970170  212092 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-431563"
	I0115 11:38:54.972034  212092 out.go:177] * Verifying csi-hostpath-driver addon...
	I0115 11:38:54.974030  212092 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0115 11:38:55.000573  212092 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0115 11:38:55.000603  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:55.004038  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:55.004220  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:55.049862  212092 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0115 11:38:55.049896  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0115 11:38:55.120790  212092 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0115 11:38:55.120814  212092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0115 11:38:55.313752  212092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0115 11:38:55.396242  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.226039187s)
	I0115 11:38:55.396304  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:55.396315  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:55.396635  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:55.396724  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:55.396746  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:55.396762  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:55.396776  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:55.397029  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:55.397048  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:55.487364  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:55.490699  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:55.494910  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:55.985659  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:55.992852  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:55.994386  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:56.485786  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:56.495275  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:56.499451  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:56.534871  212092 pod_ready.go:102] pod "coredns-5dd5756b68-f2m28" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:56.813303  212092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.499492647s)
	I0115 11:38:56.813382  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:56.813404  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:56.813831  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:56.813851  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:56.813863  212092 main.go:141] libmachine: Making call to close driver server
	I0115 11:38:56.813873  212092 main.go:141] libmachine: (addons-431563) Calling .Close
	I0115 11:38:56.814130  212092 main.go:141] libmachine: (addons-431563) DBG | Closing plugin on server side
	I0115 11:38:56.814166  212092 main.go:141] libmachine: Successfully made call to close driver server
	I0115 11:38:56.814175  212092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0115 11:38:56.816117  212092 addons.go:470] Verifying addon gcp-auth=true in "addons-431563"
	I0115 11:38:56.817901  212092 out.go:177] * Verifying gcp-auth addon...
	I0115 11:38:56.820335  212092 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0115 11:38:56.827945  212092 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0115 11:38:56.827962  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:56.981379  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:56.991688  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:56.992401  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:57.325176  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:57.483020  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:57.496762  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:57.505028  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:57.825106  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:57.983934  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:57.998294  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:58.008590  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:58.326112  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:58.481091  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:58.490958  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:58.492251  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:58.824216  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:58.984908  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:58.992868  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:58.992903  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:59.028704  212092 pod_ready.go:102] pod "coredns-5dd5756b68-f2m28" in "kube-system" namespace has status "Ready":"False"
	I0115 11:38:59.325695  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:59.480609  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:59.493351  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:59.496183  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:38:59.825187  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:38:59.980066  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:38:59.994281  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:38:59.994507  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:00.326017  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:00.480360  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:00.827355  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:00.829818  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:00.830964  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:00.979652  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:00.992636  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:00.992644  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:01.029011  212092 pod_ready.go:102] pod "coredns-5dd5756b68-f2m28" in "kube-system" namespace has status "Ready":"False"
	I0115 11:39:01.324989  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:01.480410  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:01.491080  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:01.491491  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:01.825590  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:01.990125  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:01.992897  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:01.997577  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:02.325573  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:02.482130  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:02.493786  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:02.496296  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:02.825751  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:02.980752  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:02.993082  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:02.995817  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:03.032017  212092 pod_ready.go:102] pod "coredns-5dd5756b68-f2m28" in "kube-system" namespace has status "Ready":"False"
	I0115 11:39:03.325345  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:03.480171  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:03.492053  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:03.492166  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:03.826264  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:03.980847  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:03.992097  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:03.996852  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:04.325850  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:04.480816  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:04.493218  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:04.493677  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:04.824533  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:04.980101  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:04.992099  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:04.994324  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:05.326359  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:05.481409  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:05.493098  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:05.493267  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:05.529852  212092 pod_ready.go:102] pod "coredns-5dd5756b68-f2m28" in "kube-system" namespace has status "Ready":"False"
	I0115 11:39:05.824577  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:05.986107  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:05.990691  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:05.991987  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:06.324315  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:06.491282  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:06.500364  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:06.500558  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:06.824906  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:06.986423  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:06.991577  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:06.993719  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:07.324434  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:07.481311  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:07.492233  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:07.492855  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:08.181134  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:08.181618  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:08.181827  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:08.185407  212092 pod_ready.go:102] pod "coredns-5dd5756b68-f2m28" in "kube-system" namespace has status "Ready":"False"
	I0115 11:39:08.187822  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:08.325013  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:08.480643  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:08.492672  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:08.496032  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:08.825210  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:08.984274  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:08.998301  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:08.998314  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:09.323938  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:09.480140  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:09.491097  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:09.494057  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:09.825221  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:09.979694  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:09.992194  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:09.997466  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:10.325298  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:10.479565  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:10.493363  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:10.493712  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:10.530094  212092 pod_ready.go:102] pod "coredns-5dd5756b68-f2m28" in "kube-system" namespace has status "Ready":"False"
	I0115 11:39:10.823945  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:10.983496  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:10.998898  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:10.999148  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:11.325048  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:11.482188  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:11.490886  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:11.494093  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:11.824538  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:11.986408  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:11.997691  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:11.998024  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:12.325750  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:12.480818  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:12.491401  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:12.493891  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:12.824250  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:12.980520  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:12.994714  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:12.995413  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:13.034006  212092 pod_ready.go:102] pod "coredns-5dd5756b68-f2m28" in "kube-system" namespace has status "Ready":"False"
	I0115 11:39:13.325081  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:13.481420  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:13.492847  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:13.493155  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:13.825171  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:13.981473  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:13.991330  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:13.991608  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:14.324883  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:14.482036  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:14.492981  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:14.493115  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:14.824645  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:14.981916  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:14.993057  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:14.998120  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:15.325113  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:15.480593  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:15.498561  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:15.498616  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:15.537491  212092 pod_ready.go:102] pod "coredns-5dd5756b68-f2m28" in "kube-system" namespace has status "Ready":"False"
	I0115 11:39:15.824937  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:15.981026  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:15.992639  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:15.996183  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:16.328009  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:16.481892  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:16.493130  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:16.493273  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:16.824901  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:16.980648  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:16.991476  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:16.993592  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:17.324268  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:17.481364  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:17.491735  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:17.492530  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:17.824442  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:17.980978  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:17.992185  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:17.994431  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:18.029828  212092 pod_ready.go:102] pod "coredns-5dd5756b68-f2m28" in "kube-system" namespace has status "Ready":"False"
	I0115 11:39:18.325379  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:18.480015  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:18.492379  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:18.493107  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:18.824577  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:18.980735  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:18.991817  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:18.993866  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:19.324297  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:19.479679  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:19.491054  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:19.491589  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:19.824456  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:19.980367  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:19.991585  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:19.992111  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:20.325865  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:20.480797  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:20.497122  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:20.497203  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:20.889286  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:20.892277  212092 pod_ready.go:102] pod "coredns-5dd5756b68-f2m28" in "kube-system" namespace has status "Ready":"False"
	I0115 11:39:20.980242  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:20.991106  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:20.991386  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:21.030426  212092 pod_ready.go:92] pod "coredns-5dd5756b68-f2m28" in "kube-system" namespace has status "Ready":"True"
	I0115 11:39:21.030465  212092 pod_ready.go:81] duration metric: took 29.008295537s waiting for pod "coredns-5dd5756b68-f2m28" in "kube-system" namespace to be "Ready" ...
	I0115 11:39:21.030480  212092 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-431563" in "kube-system" namespace to be "Ready" ...
	I0115 11:39:21.036155  212092 pod_ready.go:92] pod "etcd-addons-431563" in "kube-system" namespace has status "Ready":"True"
	I0115 11:39:21.036179  212092 pod_ready.go:81] duration metric: took 5.689499ms waiting for pod "etcd-addons-431563" in "kube-system" namespace to be "Ready" ...
	I0115 11:39:21.036192  212092 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-431563" in "kube-system" namespace to be "Ready" ...
	I0115 11:39:21.047332  212092 pod_ready.go:92] pod "kube-apiserver-addons-431563" in "kube-system" namespace has status "Ready":"True"
	I0115 11:39:21.047361  212092 pod_ready.go:81] duration metric: took 11.160835ms waiting for pod "kube-apiserver-addons-431563" in "kube-system" namespace to be "Ready" ...
	I0115 11:39:21.047375  212092 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-431563" in "kube-system" namespace to be "Ready" ...
	I0115 11:39:21.056105  212092 pod_ready.go:92] pod "kube-controller-manager-addons-431563" in "kube-system" namespace has status "Ready":"True"
	I0115 11:39:21.056128  212092 pod_ready.go:81] duration metric: took 8.743673ms waiting for pod "kube-controller-manager-addons-431563" in "kube-system" namespace to be "Ready" ...
	I0115 11:39:21.056143  212092 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwnrh" in "kube-system" namespace to be "Ready" ...
	I0115 11:39:21.095076  212092 pod_ready.go:92] pod "kube-proxy-fwnrh" in "kube-system" namespace has status "Ready":"True"
	I0115 11:39:21.095107  212092 pod_ready.go:81] duration metric: took 38.956486ms waiting for pod "kube-proxy-fwnrh" in "kube-system" namespace to be "Ready" ...
	I0115 11:39:21.095118  212092 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-431563" in "kube-system" namespace to be "Ready" ...
	I0115 11:39:21.326220  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:21.484097  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:21.492268  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:21.492315  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 11:39:21.495420  212092 pod_ready.go:92] pod "kube-scheduler-addons-431563" in "kube-system" namespace has status "Ready":"True"
	I0115 11:39:21.495438  212092 pod_ready.go:81] duration metric: took 400.314301ms waiting for pod "kube-scheduler-addons-431563" in "kube-system" namespace to be "Ready" ...
	I0115 11:39:21.495449  212092 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-jx62w" in "kube-system" namespace to be "Ready" ...
	I0115 11:39:21.825628  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:21.980417  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:21.993143  212092 kapi.go:107] duration metric: took 30.009877138s to wait for kubernetes.io/minikube-addons=registry ...
	I0115 11:39:21.993747  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:22.325285  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:22.479912  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:22.491716  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:22.824663  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:22.986950  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:22.992306  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:23.324658  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:23.480312  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:23.491394  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:23.501980  212092 pod_ready.go:102] pod "metrics-server-7c66d45ddc-jx62w" in "kube-system" namespace has status "Ready":"False"
	I0115 11:39:23.824739  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:23.982072  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:23.991654  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:24.325291  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:24.483342  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:24.491621  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:24.825201  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:24.984905  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:24.992418  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:25.324931  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:25.480470  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:25.492590  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:25.504981  212092 pod_ready.go:102] pod "metrics-server-7c66d45ddc-jx62w" in "kube-system" namespace has status "Ready":"False"
	I0115 11:39:25.827112  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:25.982503  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:25.992680  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:26.324432  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:26.480565  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:26.491645  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:26.824769  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:26.980428  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:26.992228  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:27.361285  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:27.480544  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:27.490686  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:27.825272  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:27.981114  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:27.990950  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:28.001510  212092 pod_ready.go:102] pod "metrics-server-7c66d45ddc-jx62w" in "kube-system" namespace has status "Ready":"False"
	I0115 11:39:28.324273  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:28.479974  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:28.491093  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:28.826995  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:28.981147  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:28.991986  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:29.324295  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:29.723623  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:29.735322  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:29.824957  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:29.981168  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:29.994228  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:30.003374  212092 pod_ready.go:102] pod "metrics-server-7c66d45ddc-jx62w" in "kube-system" namespace has status "Ready":"False"
	I0115 11:39:30.325659  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:30.480057  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:30.490956  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:30.824650  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:30.981735  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:30.993198  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:31.324875  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:31.480767  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:31.491115  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:31.825191  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:31.990371  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:31.996861  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:32.014071  212092 pod_ready.go:102] pod "metrics-server-7c66d45ddc-jx62w" in "kube-system" namespace has status "Ready":"False"
	I0115 11:39:32.324983  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:32.480277  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:32.491654  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:32.832413  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:32.981201  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:32.997308  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:33.002654  212092 pod_ready.go:92] pod "metrics-server-7c66d45ddc-jx62w" in "kube-system" namespace has status "Ready":"True"
	I0115 11:39:33.002682  212092 pod_ready.go:81] duration metric: took 11.507225046s waiting for pod "metrics-server-7c66d45ddc-jx62w" in "kube-system" namespace to be "Ready" ...
	I0115 11:39:33.002694  212092 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-x5nt7" in "kube-system" namespace to be "Ready" ...
	I0115 11:39:33.007853  212092 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-x5nt7" in "kube-system" namespace has status "Ready":"True"
	I0115 11:39:33.007873  212092 pod_ready.go:81] duration metric: took 5.172407ms waiting for pod "nvidia-device-plugin-daemonset-x5nt7" in "kube-system" namespace to be "Ready" ...
	I0115 11:39:33.007890  212092 pod_ready.go:38] duration metric: took 41.02180248s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 11:39:33.007909  212092 api_server.go:52] waiting for apiserver process to appear ...
	I0115 11:39:33.008010  212092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 11:39:33.033507  212092 api_server.go:72] duration metric: took 52.517727842s to wait for apiserver process to appear ...
	I0115 11:39:33.033540  212092 api_server.go:88] waiting for apiserver healthz status ...
	I0115 11:39:33.033595  212092 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0115 11:39:33.039755  212092 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I0115 11:39:33.041304  212092 api_server.go:141] control plane version: v1.28.4
	I0115 11:39:33.041332  212092 api_server.go:131] duration metric: took 7.783779ms to wait for apiserver health ...
	I0115 11:39:33.041344  212092 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 11:39:33.049376  212092 system_pods.go:59] 18 kube-system pods found
	I0115 11:39:33.049406  212092 system_pods.go:61] "coredns-5dd5756b68-f2m28" [ff26f94b-9996-4aab-9ff9-036a14c67030] Running
	I0115 11:39:33.049417  212092 system_pods.go:61] "csi-hostpath-attacher-0" [db1ef2f8-8de5-4151-84d3-6a64fb2b27e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0115 11:39:33.049424  212092 system_pods.go:61] "csi-hostpath-resizer-0" [04332f7b-72ff-42e9-88ba-9cc8c9912dd2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0115 11:39:33.049433  212092 system_pods.go:61] "csi-hostpathplugin-s9bm2" [21961aed-af30-464b-847a-ca4132a1289d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 11:39:33.049439  212092 system_pods.go:61] "etcd-addons-431563" [425fe5e1-d227-4370-9965-e30798627fe3] Running
	I0115 11:39:33.049443  212092 system_pods.go:61] "kube-apiserver-addons-431563" [9306c11d-5158-4e26-8e74-ffc7f8a2b0f0] Running
	I0115 11:39:33.049453  212092 system_pods.go:61] "kube-controller-manager-addons-431563" [6f4e3b7e-7936-4303-a534-da9fcc1cbb34] Running
	I0115 11:39:33.049458  212092 system_pods.go:61] "kube-ingress-dns-minikube" [4a8ba5ff-b961-4f79-96c9-f78bf26c31f3] Running
	I0115 11:39:33.049464  212092 system_pods.go:61] "kube-proxy-fwnrh" [2d348fb7-53a9-494c-966a-85d80e683244] Running
	I0115 11:39:33.049475  212092 system_pods.go:61] "kube-scheduler-addons-431563" [8a5bceb2-62c6-4720-8967-b4571a20a71c] Running
	I0115 11:39:33.049482  212092 system_pods.go:61] "metrics-server-7c66d45ddc-jx62w" [24b2c90f-46cf-4c50-aabe-d7bba12f8c79] Running
	I0115 11:39:33.049489  212092 system_pods.go:61] "nvidia-device-plugin-daemonset-x5nt7" [a6842468-f8ad-4cc8-8fc7-6ab79386a78a] Running
	I0115 11:39:33.049495  212092 system_pods.go:61] "registry-lb2gb" [dcaf122f-5ed9-41c0-b3af-9ba6f721c0b5] Running
	I0115 11:39:33.049501  212092 system_pods.go:61] "registry-proxy-wwnrn" [593585c9-6917-42ab-9269-d5dfcddaed1b] Running
	I0115 11:39:33.049511  212092 system_pods.go:61] "snapshot-controller-58dbcc7b99-4fsk7" [ed7d6674-3322-4268-aca8-69848fcd6465] Running
	I0115 11:39:33.049518  212092 system_pods.go:61] "snapshot-controller-58dbcc7b99-sr4b5" [8d8c447a-e504-4ae5-a754-dab2535a2544] Running
	I0115 11:39:33.049524  212092 system_pods.go:61] "storage-provisioner" [98a811db-3aa2-47ae-a494-96a738f37083] Running
	I0115 11:39:33.049531  212092 system_pods.go:61] "tiller-deploy-7b677967b9-cmkcx" [3f460238-8c55-491b-a343-20d0169c76f8] Running
	I0115 11:39:33.049538  212092 system_pods.go:74] duration metric: took 8.187316ms to wait for pod list to return data ...
	I0115 11:39:33.049557  212092 default_sa.go:34] waiting for default service account to be created ...
	I0115 11:39:33.052338  212092 default_sa.go:45] found service account: "default"
	I0115 11:39:33.052364  212092 default_sa.go:55] duration metric: took 2.79959ms for default service account to be created ...
	I0115 11:39:33.052374  212092 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 11:39:33.061629  212092 system_pods.go:86] 18 kube-system pods found
	I0115 11:39:33.061653  212092 system_pods.go:89] "coredns-5dd5756b68-f2m28" [ff26f94b-9996-4aab-9ff9-036a14c67030] Running
	I0115 11:39:33.061660  212092 system_pods.go:89] "csi-hostpath-attacher-0" [db1ef2f8-8de5-4151-84d3-6a64fb2b27e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0115 11:39:33.061666  212092 system_pods.go:89] "csi-hostpath-resizer-0" [04332f7b-72ff-42e9-88ba-9cc8c9912dd2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0115 11:39:33.061674  212092 system_pods.go:89] "csi-hostpathplugin-s9bm2" [21961aed-af30-464b-847a-ca4132a1289d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 11:39:33.061678  212092 system_pods.go:89] "etcd-addons-431563" [425fe5e1-d227-4370-9965-e30798627fe3] Running
	I0115 11:39:33.061683  212092 system_pods.go:89] "kube-apiserver-addons-431563" [9306c11d-5158-4e26-8e74-ffc7f8a2b0f0] Running
	I0115 11:39:33.061687  212092 system_pods.go:89] "kube-controller-manager-addons-431563" [6f4e3b7e-7936-4303-a534-da9fcc1cbb34] Running
	I0115 11:39:33.061692  212092 system_pods.go:89] "kube-ingress-dns-minikube" [4a8ba5ff-b961-4f79-96c9-f78bf26c31f3] Running
	I0115 11:39:33.061696  212092 system_pods.go:89] "kube-proxy-fwnrh" [2d348fb7-53a9-494c-966a-85d80e683244] Running
	I0115 11:39:33.061699  212092 system_pods.go:89] "kube-scheduler-addons-431563" [8a5bceb2-62c6-4720-8967-b4571a20a71c] Running
	I0115 11:39:33.061703  212092 system_pods.go:89] "metrics-server-7c66d45ddc-jx62w" [24b2c90f-46cf-4c50-aabe-d7bba12f8c79] Running
	I0115 11:39:33.061708  212092 system_pods.go:89] "nvidia-device-plugin-daemonset-x5nt7" [a6842468-f8ad-4cc8-8fc7-6ab79386a78a] Running
	I0115 11:39:33.061712  212092 system_pods.go:89] "registry-lb2gb" [dcaf122f-5ed9-41c0-b3af-9ba6f721c0b5] Running
	I0115 11:39:33.061716  212092 system_pods.go:89] "registry-proxy-wwnrn" [593585c9-6917-42ab-9269-d5dfcddaed1b] Running
	I0115 11:39:33.061720  212092 system_pods.go:89] "snapshot-controller-58dbcc7b99-4fsk7" [ed7d6674-3322-4268-aca8-69848fcd6465] Running
	I0115 11:39:33.061729  212092 system_pods.go:89] "snapshot-controller-58dbcc7b99-sr4b5" [8d8c447a-e504-4ae5-a754-dab2535a2544] Running
	I0115 11:39:33.061733  212092 system_pods.go:89] "storage-provisioner" [98a811db-3aa2-47ae-a494-96a738f37083] Running
	I0115 11:39:33.061737  212092 system_pods.go:89] "tiller-deploy-7b677967b9-cmkcx" [3f460238-8c55-491b-a343-20d0169c76f8] Running
	I0115 11:39:33.061745  212092 system_pods.go:126] duration metric: took 9.365466ms to wait for k8s-apps to be running ...
	I0115 11:39:33.061753  212092 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 11:39:33.061798  212092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:39:33.085273  212092 system_svc.go:56] duration metric: took 23.508805ms WaitForService to wait for kubelet.
	I0115 11:39:33.085299  212092 kubeadm.go:581] duration metric: took 52.569530489s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 11:39:33.085319  212092 node_conditions.go:102] verifying NodePressure condition ...
	I0115 11:39:33.088424  212092 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0115 11:39:33.088451  212092 node_conditions.go:123] node cpu capacity is 2
	I0115 11:39:33.088463  212092 node_conditions.go:105] duration metric: took 3.139225ms to run NodePressure ...
	I0115 11:39:33.088476  212092 start.go:228] waiting for startup goroutines ...
	I0115 11:39:33.326145  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:33.481424  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:33.491995  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:33.827571  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:33.982234  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:33.991868  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:34.325166  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:34.480781  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:34.490967  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:34.824075  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:34.982055  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:34.991848  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:35.327543  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:35.760752  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:35.766174  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:35.826809  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:35.980513  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:35.992865  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:36.325144  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:36.481995  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:36.491375  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:36.825309  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:36.980605  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:36.991147  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:37.324888  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:37.480716  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:37.490754  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:37.824241  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:37.981749  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:37.991927  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:38.325266  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:38.481509  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:38.492292  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:38.833998  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:38.981828  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:38.992438  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:39.326436  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:39.627203  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:39.627230  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:39.861436  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:39.998909  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:39.999157  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:40.325870  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:40.481447  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:40.491987  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:40.829743  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:40.984728  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:40.991263  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:41.324853  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:41.480340  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:41.492302  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:41.825162  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:41.980893  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:41.991862  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:42.325194  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:42.484179  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:42.491039  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:42.824492  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:42.980622  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:42.991622  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:43.324783  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:43.480314  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:43.491264  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:43.825985  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:43.980469  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:43.992035  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:44.332769  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:44.480960  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:44.496459  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:44.826551  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:44.984219  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:44.991471  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:45.325080  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:45.481419  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:45.491462  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:45.824107  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:45.980507  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:45.991615  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:46.326351  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:46.480199  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:46.491293  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:46.824359  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:46.980058  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:46.992626  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:47.324563  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:47.480465  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:47.491408  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:47.847252  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:47.980263  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:47.999393  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:48.324692  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:48.480170  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:48.491923  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:48.825155  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:48.981081  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:48.991146  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:49.328341  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:49.480724  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:49.491603  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:49.827646  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:49.980380  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:49.991085  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:50.324741  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:50.481249  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 11:39:50.491448  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:50.825157  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:50.984282  212092 kapi.go:107] duration metric: took 56.010251196s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0115 11:39:50.992033  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:51.324659  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:51.492890  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:51.824147  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:51.992955  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:52.324838  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:52.492302  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:52.824435  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:52.992610  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:53.325701  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:53.491989  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:53.824411  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:53.992711  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:54.324360  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:54.492131  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:54.824913  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:54.992364  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:55.327264  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:55.491717  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:55.825640  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:55.995968  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:56.324672  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:56.497856  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:56.825208  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:56.991892  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:57.325290  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:57.491869  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:57.826025  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:57.992450  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:58.325080  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:58.495175  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:58.824346  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:58.991871  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:59.324458  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:59.492263  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:39:59.825027  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:39:59.991342  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:40:00.325192  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:40:00.492931  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:40:00.825877  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:40:00.992007  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:40:01.327291  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:40:01.494978  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:40:01.826260  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:40:01.991530  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:40:02.326078  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:40:02.495005  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:40:02.826871  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:40:03.004624  212092 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 11:40:03.326376  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:40:03.493536  212092 kapi.go:107] duration metric: took 1m11.506789397s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0115 11:40:03.825160  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:40:04.325572  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:40:04.824635  212092 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 11:40:05.336054  212092 kapi.go:107] duration metric: took 1m8.515711489s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0115 11:40:05.338198  212092 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-431563 cluster.
	I0115 11:40:05.339767  212092 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0115 11:40:05.341496  212092 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0115 11:40:05.343176  212092 out.go:177] * Enabled addons: cloud-spanner, helm-tiller, storage-provisioner, yakd, metrics-server, nvidia-device-plugin, ingress-dns, inspektor-gadget, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0115 11:40:05.344380  212092 addons.go:505] enable addons completed in 1m25.334092499s: enabled=[cloud-spanner helm-tiller storage-provisioner yakd metrics-server nvidia-device-plugin ingress-dns inspektor-gadget default-storageclass storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0115 11:40:05.344429  212092 start.go:233] waiting for cluster config update ...
	I0115 11:40:05.344453  212092 start.go:242] writing updated cluster config ...
	I0115 11:40:05.344776  212092 ssh_runner.go:195] Run: rm -f paused
	I0115 11:40:05.411335  212092 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 11:40:05.526627  212092 out.go:177] * Done! kubectl is now configured to use "addons-431563" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	f1b898b199f0a       98f6c3b32d565       3 seconds ago        Exited              helm-test                                0                   e0c49bd2630d5       helm-test
	52064a2d295b7       3cb09943f099d       19 seconds ago       Running             headlamp                                 0                   2e0d9c710b602       headlamp-7ddfbb94ff-kpns7
	92debfc2895df       6d2a98b274382       40 seconds ago       Running             gcp-auth                                 0                   7e5580d198bfc       gcp-auth-d4c87556c-tpc4x
	2b6670f148eab       311f90a3747fd       42 seconds ago       Running             controller                               0                   8476975da5250       ingress-nginx-controller-69cff4fd79-xwftj
	129f51a4bc1c4       738351fd438f0       54 seconds ago       Running             csi-snapshotter                          0                   f3a40d2a73cbc       csi-hostpathplugin-s9bm2
	849fac676ad12       931dbfd16f87c       56 seconds ago       Running             csi-provisioner                          0                   f3a40d2a73cbc       csi-hostpathplugin-s9bm2
	c1fe175bb7545       e899260153aed       57 seconds ago       Running             liveness-probe                           0                   f3a40d2a73cbc       csi-hostpathplugin-s9bm2
	3b61d9e62fc55       e255e073c508c       58 seconds ago       Running             hostpath                                 0                   f3a40d2a73cbc       csi-hostpathplugin-s9bm2
	0f0d3dbfec5d7       88ef14a257f42       59 seconds ago       Running             node-driver-registrar                    0                   f3a40d2a73cbc       csi-hostpathplugin-s9bm2
	2b4fb8338de86       a1ed5895ba635       About a minute ago   Running             csi-external-health-monitor-controller   0                   f3a40d2a73cbc       csi-hostpathplugin-s9bm2
	729ebb2175ddd       19a639eda60f0       About a minute ago   Running             csi-resizer                              0                   0584fb55a71e1       csi-hostpath-resizer-0
	5062addde3c31       1ebff0f9671bc       About a minute ago   Exited              patch                                    0                   31028c456e38b       ingress-nginx-admission-patch-5b9xf
	e9f8bc8b33303       59cbb42146a37       About a minute ago   Running             csi-attacher                             0                   2b56c50488763       csi-hostpath-attacher-0
	d32c5712b051e       1ebff0f9671bc       About a minute ago   Exited              create                                   0                   13675fc9b094e       ingress-nginx-admission-create-fs9bw
	9bae54c9a94c8       e16d1e3a10667       About a minute ago   Running             local-path-provisioner                   0                   1a915b8247998       local-path-provisioner-78b46b4d5c-vjtpp
	7e42af79a3672       31de47c733c91       About a minute ago   Running             yakd                                     0                   6a9a66acc0d2f       yakd-dashboard-9947fc6bf-vdggf
	4671e5f9ad618       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   e1d5c96e3978e       snapshot-controller-58dbcc7b99-4fsk7
	b0dba84ef9eea       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   8cd1f63d3ca81       snapshot-controller-58dbcc7b99-sr4b5
	aae5b527406e3       3f39089e90831       About a minute ago   Running             tiller                                   0                   6af6e1048ff61       tiller-deploy-7b677967b9-cmkcx
	4fb4c1511e1a6       1499ed4fbd0aa       About a minute ago   Running             minikube-ingress-dns                     0                   6931f3927b6f1       kube-ingress-dns-minikube
	9751d9aa40631       6e38f40d628db       About a minute ago   Running             storage-provisioner                      0                   1f90a04c0a401       storage-provisioner
	8531fe16028b2       83f6cc407eed8       2 minutes ago        Running             kube-proxy                               0                   a321be2edc81d       kube-proxy-fwnrh
	76392270d1891       ead0a4a53df89       2 minutes ago        Running             coredns                                  0                   7bbbb42587e7a       coredns-5dd5756b68-f2m28
	f91a4522dff1d       73deb9a3f7025       2 minutes ago        Running             etcd                                     0                   b3d512d501711       etcd-addons-431563
	4c6454d232563       d058aa5ab969c       2 minutes ago        Running             kube-controller-manager                  0                   3b86cc9a3e361       kube-controller-manager-addons-431563
	50a7c9e4a2673       e3db313c6dbc0       2 minutes ago        Running             kube-scheduler                           0                   91092037565b6       kube-scheduler-addons-431563
	6e7811bd3d2d5       7fe0e6f37db33       2 minutes ago        Running             kube-apiserver                           0                   27b6874e96f25       kube-apiserver-addons-431563
	
	
	==> containerd <==
	-- Journal begins at Mon 2024-01-15 11:37:53 UTC, ends at Mon 2024-01-15 11:40:45 UTC. --
	Jan 15 11:40:42 addons-431563 containerd[689]: time="2024-01-15T11:40:42.440279089Z" level=info msg="shim disconnected" id=3740939ec060e1f541d9881b9784b139c3c769e2987158708cf60bc955f7bf04 namespace=k8s.io
	Jan 15 11:40:42 addons-431563 containerd[689]: time="2024-01-15T11:40:42.440618655Z" level=warning msg="cleaning up after shim disconnected" id=3740939ec060e1f541d9881b9784b139c3c769e2987158708cf60bc955f7bf04 namespace=k8s.io
	Jan 15 11:40:42 addons-431563 containerd[689]: time="2024-01-15T11:40:42.440669337Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 15 11:40:42 addons-431563 containerd[689]: time="2024-01-15T11:40:42.495321469Z" level=info msg="TearDown network for sandbox \"3740939ec060e1f541d9881b9784b139c3c769e2987158708cf60bc955f7bf04\" successfully"
	Jan 15 11:40:42 addons-431563 containerd[689]: time="2024-01-15T11:40:42.495448573Z" level=info msg="StopPodSandbox for \"3740939ec060e1f541d9881b9784b139c3c769e2987158708cf60bc955f7bf04\" returns successfully"
	Jan 15 11:40:43 addons-431563 containerd[689]: time="2024-01-15T11:40:43.109043815Z" level=info msg="Finish port forwarding for \"6af6e1048ff6188e02342d5ac2221b9b4644b301ac5dbde62cf4f18349d08e4e\" port 44134"
	Jan 15 11:40:43 addons-431563 containerd[689]: time="2024-01-15T11:40:43.420762936Z" level=info msg="StopPodSandbox for \"e0c49bd2630d55c60cd8963585e8ec6a66ed36730715a0c1d93c911972aa592f\""
	Jan 15 11:40:43 addons-431563 containerd[689]: time="2024-01-15T11:40:43.421095784Z" level=info msg="Container to stop \"f1b898b199f0a8127511547436d568ed979639bcb89a6f533cfd9de40bc0f5c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 15 11:40:43 addons-431563 containerd[689]: time="2024-01-15T11:40:43.425722856Z" level=info msg="RemoveContainer for \"17be1bbc934451b080d2a7464440c5629727519f3da0c9fb1293ad02a1a827f0\""
	Jan 15 11:40:43 addons-431563 containerd[689]: time="2024-01-15T11:40:43.450090670Z" level=info msg="RemoveContainer for \"17be1bbc934451b080d2a7464440c5629727519f3da0c9fb1293ad02a1a827f0\" returns successfully"
	Jan 15 11:40:43 addons-431563 containerd[689]: time="2024-01-15T11:40:43.596196323Z" level=info msg="shim disconnected" id=e0c49bd2630d55c60cd8963585e8ec6a66ed36730715a0c1d93c911972aa592f namespace=k8s.io
	Jan 15 11:40:43 addons-431563 containerd[689]: time="2024-01-15T11:40:43.596268776Z" level=warning msg="cleaning up after shim disconnected" id=e0c49bd2630d55c60cd8963585e8ec6a66ed36730715a0c1d93c911972aa592f namespace=k8s.io
	Jan 15 11:40:43 addons-431563 containerd[689]: time="2024-01-15T11:40:43.596280752Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 15 11:40:43 addons-431563 containerd[689]: time="2024-01-15T11:40:43.628368655Z" level=warning msg="cleanup warnings time=\"2024-01-15T11:40:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Jan 15 11:40:43 addons-431563 containerd[689]: time="2024-01-15T11:40:43.679734463Z" level=info msg="TearDown network for sandbox \"e0c49bd2630d55c60cd8963585e8ec6a66ed36730715a0c1d93c911972aa592f\" successfully"
	Jan 15 11:40:43 addons-431563 containerd[689]: time="2024-01-15T11:40:43.679841874Z" level=info msg="StopPodSandbox for \"e0c49bd2630d55c60cd8963585e8ec6a66ed36730715a0c1d93c911972aa592f\" returns successfully"
	Jan 15 11:40:44 addons-431563 containerd[689]: time="2024-01-15T11:40:44.069718051Z" level=info msg="StopPodSandbox for \"bce3afcbc66d8ff24f5112b9093f9a82d575207ae9cecfe83968b8fe3d2300a9\""
	Jan 15 11:40:44 addons-431563 containerd[689]: time="2024-01-15T11:40:44.069835209Z" level=info msg="Container to stop \"9199927ced2a7c4f5991f5c749a94f198c83ba04977c017e5b2e32dbaa94d93f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 15 11:40:44 addons-431563 containerd[689]: time="2024-01-15T11:40:44.250534360Z" level=info msg="shim disconnected" id=bce3afcbc66d8ff24f5112b9093f9a82d575207ae9cecfe83968b8fe3d2300a9 namespace=k8s.io
	Jan 15 11:40:44 addons-431563 containerd[689]: time="2024-01-15T11:40:44.250759129Z" level=warning msg="cleaning up after shim disconnected" id=bce3afcbc66d8ff24f5112b9093f9a82d575207ae9cecfe83968b8fe3d2300a9 namespace=k8s.io
	Jan 15 11:40:44 addons-431563 containerd[689]: time="2024-01-15T11:40:44.251166246Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 15 11:40:44 addons-431563 containerd[689]: time="2024-01-15T11:40:44.287572281Z" level=info msg="TearDown network for sandbox \"bce3afcbc66d8ff24f5112b9093f9a82d575207ae9cecfe83968b8fe3d2300a9\" successfully"
	Jan 15 11:40:44 addons-431563 containerd[689]: time="2024-01-15T11:40:44.287783290Z" level=info msg="StopPodSandbox for \"bce3afcbc66d8ff24f5112b9093f9a82d575207ae9cecfe83968b8fe3d2300a9\" returns successfully"
	Jan 15 11:40:44 addons-431563 containerd[689]: time="2024-01-15T11:40:44.429229457Z" level=info msg="RemoveContainer for \"9199927ced2a7c4f5991f5c749a94f198c83ba04977c017e5b2e32dbaa94d93f\""
	Jan 15 11:40:44 addons-431563 containerd[689]: time="2024-01-15T11:40:44.440542866Z" level=info msg="RemoveContainer for \"9199927ced2a7c4f5991f5c749a94f198c83ba04977c017e5b2e32dbaa94d93f\" returns successfully"
	
	
	==> coredns [76392270d1891588436d8277f1521f0469415bc0f3e70dd37777fd4e6387bda7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:50451 - 1185 "HINFO IN 5144150467624783883.8731048848579496768. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020144754s
	[INFO] 10.244.0.21:40402 - 3205 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000307703s
	[INFO] 10.244.0.21:60185 - 61061 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000088059s
	[INFO] 10.244.0.21:44283 - 5005 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000175977s
	[INFO] 10.244.0.21:58832 - 60436 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00007828s
	[INFO] 10.244.0.21:53186 - 4205 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099207s
	[INFO] 10.244.0.21:38362 - 62751 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110978s
	[INFO] 10.244.0.21:37696 - 61130 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000729559s
	[INFO] 10.244.0.21:37287 - 8969 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.001036358s
	[INFO] 10.244.0.25:55423 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000833945s
	[INFO] 10.244.0.25:38496 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000160085s
	
	
	==> describe nodes <==
	Name:               addons-431563
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-431563
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=71cf7d00913f789829bf5813c1d11b9a83eda53e
	                    minikube.k8s.io/name=addons-431563
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T11_38_27_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-431563
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-431563"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 11:38:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-431563
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 11:40:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 11:40:29 +0000   Mon, 15 Jan 2024 11:38:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 11:40:29 +0000   Mon, 15 Jan 2024 11:38:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 11:40:29 +0000   Mon, 15 Jan 2024 11:38:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 11:40:29 +0000   Mon, 15 Jan 2024 11:38:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    addons-431563
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 e1970b8766914a099ec5ab41ea083ab1
	  System UUID:                e1970b87-6691-4a09-9ec5-ab41ea083ab1
	  Boot ID:                    dcb6d3b0-039c-4aa7-a995-1d934d5f7899
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-d4c87556c-tpc4x                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  headlamp                    headlamp-7ddfbb94ff-kpns7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-xwftj    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         114s
	  kube-system                 coredns-5dd5756b68-f2m28                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m5s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 csi-hostpathplugin-s9bm2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 etcd-addons-431563                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m20s
	  kube-system                 kube-apiserver-addons-431563                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-controller-manager-addons-431563        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-fwnrh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-scheduler-addons-431563                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 snapshot-controller-58dbcc7b99-4fsk7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 snapshot-controller-58dbcc7b99-sr4b5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 tiller-deploy-7b677967b9-cmkcx               0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  local-path-storage          local-path-provisioner-78b46b4d5c-vjtpp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-vdggf               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     116s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m2s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m27s (x8 over 2m27s)  kubelet          Node addons-431563 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s (x8 over 2m27s)  kubelet          Node addons-431563 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s (x7 over 2m27s)  kubelet          Node addons-431563 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m18s                  kubelet          Node addons-431563 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m18s                  kubelet          Node addons-431563 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m18s                  kubelet          Node addons-431563 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m18s                  kubelet          Node addons-431563 status is now: NodeReady
	  Normal  RegisteredNode           2m6s                   node-controller  Node addons-431563 event: Registered Node addons-431563 in Controller
	
	
	==> dmesg <==
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.042641] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan15 11:38] systemd-fstab-generator[557]: Ignoring "noauto" for root device
	[  +0.106872] systemd-fstab-generator[568]: Ignoring "noauto" for root device
	[  +0.143977] systemd-fstab-generator[581]: Ignoring "noauto" for root device
	[  +0.097721] systemd-fstab-generator[592]: Ignoring "noauto" for root device
	[  +0.245898] systemd-fstab-generator[619]: Ignoring "noauto" for root device
	[  +6.022522] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +4.677233] systemd-fstab-generator[842]: Ignoring "noauto" for root device
	[  +8.741211] systemd-fstab-generator[1196]: Ignoring "noauto" for root device
	[ +19.028781] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.055804] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.001585] kauditd_printk_skb: 28 callbacks suppressed
	[Jan15 11:39] kauditd_printk_skb: 16 callbacks suppressed
	[ +30.218222] kauditd_printk_skb: 22 callbacks suppressed
	[ +11.695536] kauditd_printk_skb: 34 callbacks suppressed
	[Jan15 11:40] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.366532] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.768200] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.153629] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.014953] kauditd_printk_skb: 16 callbacks suppressed
	[  +8.632216] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.533360] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [f91a4522dff1db809549f25be555e02295dd128b66dc88798c9e7595f4a0815f] <==
	{"level":"info","ts":"2024-01-15T11:39:29.709491Z","caller":"traceutil/trace.go:171","msg":"trace[530398235] transaction","detail":"{read_only:false; response_revision:982; number_of_response:1; }","duration":"348.316033ms","start":"2024-01-15T11:39:29.361165Z","end":"2024-01-15T11:39:29.709481Z","steps":["trace[530398235] 'process raft request'  (duration: 347.948376ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T11:39:29.709635Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T11:39:29.361146Z","time spent":"348.404353ms","remote":"127.0.0.1:49250","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:974 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-01-15T11:39:29.709975Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.03494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81787"}
	{"level":"info","ts":"2024-01-15T11:39:29.710034Z","caller":"traceutil/trace.go:171","msg":"trace[100841265] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:982; }","duration":"239.097114ms","start":"2024-01-15T11:39:29.470925Z","end":"2024-01-15T11:39:29.710022Z","steps":["trace[100841265] 'agreement among raft nodes before linearized reading'  (duration: 238.836083ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T11:39:29.711281Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.23879ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13488"}
	{"level":"info","ts":"2024-01-15T11:39:29.711341Z","caller":"traceutil/trace.go:171","msg":"trace[1218429713] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:982; }","duration":"227.302711ms","start":"2024-01-15T11:39:29.48403Z","end":"2024-01-15T11:39:29.711333Z","steps":["trace[1218429713] 'agreement among raft nodes before linearized reading'  (duration: 227.1852ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T11:39:29.711678Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.852331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-7c66d45ddc-jx62w\" ","response":"range_response_count:1 size:4016"}
	{"level":"info","ts":"2024-01-15T11:39:29.711738Z","caller":"traceutil/trace.go:171","msg":"trace[443885149] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-7c66d45ddc-jx62w; range_end:; response_count:1; response_revision:982; }","duration":"219.911774ms","start":"2024-01-15T11:39:29.491817Z","end":"2024-01-15T11:39:29.711729Z","steps":["trace[443885149] 'agreement among raft nodes before linearized reading'  (duration: 219.837626ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T11:39:35.747106Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-15T11:39:35.373155Z","time spent":"373.948607ms","remote":"127.0.0.1:49216","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-01-15T11:39:35.74732Z","caller":"traceutil/trace.go:171","msg":"trace[420893578] linearizableReadLoop","detail":"{readStateIndex:1036; appliedIndex:1036; }","duration":"275.938763ms","start":"2024-01-15T11:39:35.47135Z","end":"2024-01-15T11:39:35.747289Z","steps":["trace[420893578] 'read index received'  (duration: 275.930885ms)","trace[420893578] 'applied index is now lower than readState.Index'  (duration: 6.209µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T11:39:35.747595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"276.248056ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81845"}
	{"level":"info","ts":"2024-01-15T11:39:35.747628Z","caller":"traceutil/trace.go:171","msg":"trace[118957113] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1005; }","duration":"276.296017ms","start":"2024-01-15T11:39:35.471325Z","end":"2024-01-15T11:39:35.747621Z","steps":["trace[118957113] 'agreement among raft nodes before linearized reading'  (duration: 276.05844ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T11:39:35.757081Z","caller":"traceutil/trace.go:171","msg":"trace[1823199842] transaction","detail":"{read_only:false; response_revision:1006; number_of_response:1; }","duration":"113.836558ms","start":"2024-01-15T11:39:35.643229Z","end":"2024-01-15T11:39:35.757066Z","steps":["trace[1823199842] 'process raft request'  (duration: 109.26658ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T11:39:35.757564Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.499853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13488"}
	{"level":"info","ts":"2024-01-15T11:39:35.757594Z","caller":"traceutil/trace.go:171","msg":"trace[1694141537] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1006; }","duration":"273.544867ms","start":"2024-01-15T11:39:35.484042Z","end":"2024-01-15T11:39:35.757587Z","steps":["trace[1694141537] 'agreement among raft nodes before linearized reading'  (duration: 273.462032ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T11:39:39.617652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.455098ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81845"}
	{"level":"info","ts":"2024-01-15T11:39:39.61777Z","caller":"traceutil/trace.go:171","msg":"trace[984254385] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1029; }","duration":"146.590893ms","start":"2024-01-15T11:39:39.471168Z","end":"2024-01-15T11:39:39.617759Z","steps":["trace[984254385] 'range keys from in-memory index tree'  (duration: 146.214175ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T11:39:39.617652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.043531ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13488"}
	{"level":"info","ts":"2024-01-15T11:39:39.618098Z","caller":"traceutil/trace.go:171","msg":"trace[1422082262] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1029; }","duration":"133.515849ms","start":"2024-01-15T11:39:39.484574Z","end":"2024-01-15T11:39:39.61809Z","steps":["trace[1422082262] 'range keys from in-memory index tree'  (duration: 132.9214ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T11:39:39.853374Z","caller":"traceutil/trace.go:171","msg":"trace[1953642013] transaction","detail":"{read_only:false; response_revision:1030; number_of_response:1; }","duration":"124.895752ms","start":"2024-01-15T11:39:39.728465Z","end":"2024-01-15T11:39:39.85336Z","steps":["trace[1953642013] 'process raft request'  (duration: 124.163876ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T11:40:05.726239Z","caller":"traceutil/trace.go:171","msg":"trace[924079433] transaction","detail":"{read_only:false; response_revision:1191; number_of_response:1; }","duration":"104.492172ms","start":"2024-01-15T11:40:05.62172Z","end":"2024-01-15T11:40:05.726212Z","steps":["trace[924079433] 'process raft request'  (duration: 102.780834ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T11:40:05.729231Z","caller":"traceutil/trace.go:171","msg":"trace[1474636090] transaction","detail":"{read_only:false; response_revision:1192; number_of_response:1; }","duration":"106.949827ms","start":"2024-01-15T11:40:05.622268Z","end":"2024-01-15T11:40:05.729218Z","steps":["trace[1474636090] 'process raft request'  (duration: 103.532304ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T11:40:30.304331Z","caller":"traceutil/trace.go:171","msg":"trace[1122577116] transaction","detail":"{read_only:false; response_revision:1443; number_of_response:1; }","duration":"130.10751ms","start":"2024-01-15T11:40:30.174198Z","end":"2024-01-15T11:40:30.304305Z","steps":["trace[1122577116] 'process raft request'  (duration: 129.795822ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T11:40:34.893311Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.846913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:16 size:73841"}
	{"level":"info","ts":"2024-01-15T11:40:34.89342Z","caller":"traceutil/trace.go:171","msg":"trace[177374242] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:16; response_revision:1458; }","duration":"161.987962ms","start":"2024-01-15T11:40:34.731418Z","end":"2024-01-15T11:40:34.893406Z","steps":["trace[177374242] 'range keys from in-memory index tree'  (duration: 161.515511ms)"],"step_count":1}
	
	
	==> gcp-auth [92debfc2895df50c33464244b3390a8daa1d1219919cac0713d1f35e596f87cb] <==
	2024/01/15 11:40:04 GCP Auth Webhook started!
	2024/01/15 11:40:05 Ready to marshal response ...
	2024/01/15 11:40:05 Ready to write response ...
	2024/01/15 11:40:06 Ready to marshal response ...
	2024/01/15 11:40:06 Ready to write response ...
	2024/01/15 11:40:16 Ready to marshal response ...
	2024/01/15 11:40:16 Ready to write response ...
	2024/01/15 11:40:16 Ready to marshal response ...
	2024/01/15 11:40:16 Ready to write response ...
	2024/01/15 11:40:19 Ready to marshal response ...
	2024/01/15 11:40:19 Ready to write response ...
	2024/01/15 11:40:19 Ready to marshal response ...
	2024/01/15 11:40:19 Ready to write response ...
	2024/01/15 11:40:19 Ready to marshal response ...
	2024/01/15 11:40:19 Ready to write response ...
	2024/01/15 11:40:28 Ready to marshal response ...
	2024/01/15 11:40:28 Ready to write response ...
	2024/01/15 11:40:34 Ready to marshal response ...
	2024/01/15 11:40:34 Ready to write response ...
	2024/01/15 11:40:40 Ready to marshal response ...
	2024/01/15 11:40:40 Ready to write response ...
	
	
	==> kernel <==
	 11:40:45 up 3 min,  0 users,  load average: 2.50, 1.61, 0.65
	Linux addons-431563 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [6e7811bd3d2d5dc76419629877a3959acd3134772bc896da92b702b595edef2b] <==
	W0115 11:38:52.750764       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 11:38:54.572000       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.98.180.168"}
	I0115 11:38:54.594332       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0115 11:38:54.879522       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.110.210.212"}
	W0115 11:38:55.835655       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 11:38:56.588120       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.38.78"}
	I0115 11:39:23.660084       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0115 11:39:32.755525       1 handler_proxy.go:93] no RequestInfo found in the context
	E0115 11:39:32.755998       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0115 11:39:32.756240       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0115 11:39:32.758762       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.9.6:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.9.6:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.9.6:443: connect: connection refused
	E0115 11:39:32.759692       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.9.6:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.9.6:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.9.6:443: connect: connection refused
	E0115 11:39:32.764246       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.9.6:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.9.6:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.9.6:443: connect: connection refused
	I0115 11:39:32.874308       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0115 11:40:19.623273       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.70.156"}
	I0115 11:40:23.661351       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0115 11:40:33.557607       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0115 11:40:33.769587       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0115 11:40:41.146285       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0115 11:40:44.008189       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0115 11:40:44.032544       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0115 11:40:45.081572       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [4c6454d2325632e3cb46d8745000040d576fea6b6e01548cb601533a3fc29daa] <==
	I0115 11:40:09.271915       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0115 11:40:15.995049       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="21.832477ms"
	I0115 11:40:15.996047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="64.899µs"
	I0115 11:40:17.023560       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0115 11:40:17.026643       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0115 11:40:17.083219       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0115 11:40:17.083919       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0115 11:40:17.581393       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="6.569µs"
	I0115 11:40:17.801980       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-64c8c85f65" duration="13.529µs"
	I0115 11:40:19.656275       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-7ddfbb94ff to 1"
	I0115 11:40:19.752106       1 event.go:307] "Event occurred" object="headlamp/headlamp-7ddfbb94ff" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-7ddfbb94ff-kpns7"
	I0115 11:40:19.765374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="107.114177ms"
	I0115 11:40:19.775985       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="10.571638ms"
	I0115 11:40:19.805980       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="29.952167ms"
	I0115 11:40:19.806083       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="72.978µs"
	I0115 11:40:21.995927       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="8.904µs"
	I0115 11:40:24.273036       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0115 11:40:26.296430       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="75.069µs"
	I0115 11:40:26.323487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="7.916285ms"
	I0115 11:40:26.324255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="675.786µs"
	I0115 11:40:27.044827       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0115 11:40:28.857136       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="5.565µs"
	I0115 11:40:43.781808       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0115 11:40:43.782386       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	E0115 11:40:45.083636       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [8531fe16028b270246f5fb00bf09e9fade6a8d660b7e95853af005632267fafa] <==
	I0115 11:38:42.581253       1 server_others.go:69] "Using iptables proxy"
	I0115 11:38:42.689054       1 node.go:141] Successfully retrieved node IP: 192.168.39.212
	I0115 11:38:42.807455       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0115 11:38:42.807553       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0115 11:38:42.811456       1 server_others.go:152] "Using iptables Proxier"
	I0115 11:38:42.811491       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 11:38:42.811831       1 server.go:846] "Version info" version="v1.28.4"
	I0115 11:38:42.812107       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 11:38:42.812735       1 config.go:188] "Starting service config controller"
	I0115 11:38:42.812749       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 11:38:42.812766       1 config.go:97] "Starting endpoint slice config controller"
	I0115 11:38:42.812769       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 11:38:42.816416       1 config.go:315] "Starting node config controller"
	I0115 11:38:42.816423       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 11:38:42.912989       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 11:38:42.913028       1 shared_informer.go:318] Caches are synced for service config
	I0115 11:38:42.922781       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [50a7c9e4a2673b35ac0a0e7817090730fad5cedab8f958d64029a25c07ef3c4e] <==
	E0115 11:38:23.800110       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 11:38:23.800333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0115 11:38:24.765812       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 11:38:24.766223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0115 11:38:24.766651       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0115 11:38:24.767068       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0115 11:38:24.772631       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0115 11:38:24.772836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0115 11:38:24.846074       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 11:38:24.846338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0115 11:38:24.875061       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0115 11:38:24.875378       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0115 11:38:24.953395       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 11:38:24.953745       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0115 11:38:24.966173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 11:38:24.966702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0115 11:38:24.969365       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0115 11:38:24.969588       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0115 11:38:24.992452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 11:38:24.992574       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0115 11:38:25.047734       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 11:38:25.047930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0115 11:38:25.077080       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 11:38:25.077198       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0115 11:38:26.585745       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-15 11:37:53 UTC, ends at Mon 2024-01-15 11:40:45 UTC. --
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.399651    1203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-host\") pod \"1ee82911-09c2-4b18-a0db-296d9a9fc97c\" (UID: \"1ee82911-09c2-4b18-a0db-296d9a9fc97c\") "
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.399766    1203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-modules\") pod \"1ee82911-09c2-4b18-a0db-296d9a9fc97c\" (UID: \"1ee82911-09c2-4b18-a0db-296d9a9fc97c\") "
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.399831    1203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-debugfs\") pod \"1ee82911-09c2-4b18-a0db-296d9a9fc97c\" (UID: \"1ee82911-09c2-4b18-a0db-296d9a9fc97c\") "
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.399990    1203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-run\") pod \"1ee82911-09c2-4b18-a0db-296d9a9fc97c\" (UID: \"1ee82911-09c2-4b18-a0db-296d9a9fc97c\") "
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.400062    1203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-bpffs\") pod \"1ee82911-09c2-4b18-a0db-296d9a9fc97c\" (UID: \"1ee82911-09c2-4b18-a0db-296d9a9fc97c\") "
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.400122    1203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgg9x\" (UniqueName: \"kubernetes.io/projected/1ee82911-09c2-4b18-a0db-296d9a9fc97c-kube-api-access-wgg9x\") pod \"1ee82911-09c2-4b18-a0db-296d9a9fc97c\" (UID: \"1ee82911-09c2-4b18-a0db-296d9a9fc97c\") "
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.400172    1203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-cgroup\") pod \"1ee82911-09c2-4b18-a0db-296d9a9fc97c\" (UID: \"1ee82911-09c2-4b18-a0db-296d9a9fc97c\") "
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.400390    1203 reconciler_common.go:300] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-host\") on node \"addons-431563\" DevicePath \"\""
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.400474    1203 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-cgroup" (OuterVolumeSpecName: "cgroup") pod "1ee82911-09c2-4b18-a0db-296d9a9fc97c" (UID: "1ee82911-09c2-4b18-a0db-296d9a9fc97c"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.400528    1203 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-modules" (OuterVolumeSpecName: "modules") pod "1ee82911-09c2-4b18-a0db-296d9a9fc97c" (UID: "1ee82911-09c2-4b18-a0db-296d9a9fc97c"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.400573    1203 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-debugfs" (OuterVolumeSpecName: "debugfs") pod "1ee82911-09c2-4b18-a0db-296d9a9fc97c" (UID: "1ee82911-09c2-4b18-a0db-296d9a9fc97c"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.400619    1203 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-run" (OuterVolumeSpecName: "run") pod "1ee82911-09c2-4b18-a0db-296d9a9fc97c" (UID: "1ee82911-09c2-4b18-a0db-296d9a9fc97c"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.400667    1203 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-bpffs" (OuterVolumeSpecName: "bpffs") pod "1ee82911-09c2-4b18-a0db-296d9a9fc97c" (UID: "1ee82911-09c2-4b18-a0db-296d9a9fc97c"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.407754    1203 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee82911-09c2-4b18-a0db-296d9a9fc97c-kube-api-access-wgg9x" (OuterVolumeSpecName: "kube-api-access-wgg9x") pod "1ee82911-09c2-4b18-a0db-296d9a9fc97c" (UID: "1ee82911-09c2-4b18-a0db-296d9a9fc97c"). InnerVolumeSpecName "kube-api-access-wgg9x". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.427217    1203 scope.go:117] "RemoveContainer" containerID="9199927ced2a7c4f5991f5c749a94f198c83ba04977c017e5b2e32dbaa94d93f"
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.433562    1203 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0c49bd2630d55c60cd8963585e8ec6a66ed36730715a0c1d93c911972aa592f"
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.501239    1203 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wgg9x\" (UniqueName: \"kubernetes.io/projected/1ee82911-09c2-4b18-a0db-296d9a9fc97c-kube-api-access-wgg9x\") on node \"addons-431563\" DevicePath \"\""
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.501453    1203 reconciler_common.go:300] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-cgroup\") on node \"addons-431563\" DevicePath \"\""
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.501500    1203 reconciler_common.go:300] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-modules\") on node \"addons-431563\" DevicePath \"\""
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.501538    1203 reconciler_common.go:300] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-debugfs\") on node \"addons-431563\" DevicePath \"\""
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.501575    1203 reconciler_common.go:300] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-run\") on node \"addons-431563\" DevicePath \"\""
	Jan 15 11:40:44 addons-431563 kubelet[1203]: I0115 11:40:44.501621    1203 reconciler_common.go:300] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/1ee82911-09c2-4b18-a0db-296d9a9fc97c-bpffs\") on node \"addons-431563\" DevicePath \"\""
	Jan 15 11:40:45 addons-431563 kubelet[1203]: I0115 11:40:45.068449    1203 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1ee82911-09c2-4b18-a0db-296d9a9fc97c" path="/var/lib/kubelet/pods/1ee82911-09c2-4b18-a0db-296d9a9fc97c/volumes"
	Jan 15 11:40:45 addons-431563 kubelet[1203]: I0115 11:40:45.068966    1203 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b65c8b45-8835-4cfd-a25e-8fab7da955fa" path="/var/lib/kubelet/pods/b65c8b45-8835-4cfd-a25e-8fab7da955fa/volumes"
	Jan 15 11:40:45 addons-431563 kubelet[1203]: I0115 11:40:45.069387    1203 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="be35316b-6a89-44b9-86a4-d09ffe1f83fd" path="/var/lib/kubelet/pods/be35316b-6a89-44b9-86a4-d09ffe1f83fd/volumes"
	
	
	==> storage-provisioner [9751d9aa40631ce1bf4c3f6867937398bb65fbcdec7bd91ec4a14761b18ea566] <==
	I0115 11:38:54.676040       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 11:38:54.706707       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 11:38:54.706801       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 11:38:54.750703       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 11:38:54.751781       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"672c5352-bd66-4476-904d-a25148fcb185", APIVersion:"v1", ResourceVersion:"797", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-431563_77e48f8f-7a40-4ba6-acab-7098d56496fa became leader
	I0115 11:38:54.751810       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-431563_77e48f8f-7a40-4ba6-acab-7098d56496fa!
	I0115 11:38:54.852359       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-431563_77e48f8f-7a40-4ba6-acab-7098d56496fa!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-431563 -n addons-431563
helpers_test.go:261: (dbg) Run:  kubectl --context addons-431563 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-fs9bw ingress-nginx-admission-patch-5b9xf
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/HelmTiller]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-431563 describe pod ingress-nginx-admission-create-fs9bw ingress-nginx-admission-patch-5b9xf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-431563 describe pod ingress-nginx-admission-create-fs9bw ingress-nginx-admission-patch-5b9xf: exit status 1 (63.07877ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fs9bw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5b9xf" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-431563 describe pod ingress-nginx-admission-create-fs9bw ingress-nginx-admission-patch-5b9xf: exit status 1
--- FAIL: TestAddons/parallel/HelmTiller (17.41s)


Test pass (278/318)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 12.21
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
9 TestDownloadOnly/v1.16.0/DeleteAll 0.15
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 5.01
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.14
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 5.1
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.15
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.58
31 TestOffline 67.49
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 145.01
38 TestAddons/parallel/Registry 16.55
39 TestAddons/parallel/Ingress 22.24
40 TestAddons/parallel/InspektorGadget 12.03
41 TestAddons/parallel/MetricsServer 6.86
44 TestAddons/parallel/CSI 63.58
45 TestAddons/parallel/Headlamp 13.8
46 TestAddons/parallel/CloudSpanner 6.29
47 TestAddons/parallel/LocalPath 55.02
48 TestAddons/parallel/NvidiaDevicePlugin 5.54
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.13
53 TestAddons/StoppedEnableDisable 92.3
54 TestCertOptions 63.47
55 TestCertExpiration 249.21
57 TestForceSystemdFlag 72.32
58 TestForceSystemdEnv 71.51
60 TestKVMDriverInstallOrUpdate 4.31
64 TestErrorSpam/setup 47.75
65 TestErrorSpam/start 0.41
66 TestErrorSpam/status 0.81
67 TestErrorSpam/pause 1.57
68 TestErrorSpam/unpause 1.72
69 TestErrorSpam/stop 2.28
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 65.1
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 5.81
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.6
81 TestFunctional/serial/CacheCmd/cache/add_local 2.1
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.94
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 44.33
90 TestFunctional/serial/ComponentHealth 0.08
91 TestFunctional/serial/LogsCmd 1.53
92 TestFunctional/serial/LogsFileCmd 1.56
93 TestFunctional/serial/InvalidService 3.93
95 TestFunctional/parallel/ConfigCmd 0.47
96 TestFunctional/parallel/DashboardCmd 15.82
97 TestFunctional/parallel/DryRun 0.37
98 TestFunctional/parallel/InternationalLanguage 0.17
99 TestFunctional/parallel/StatusCmd 1.15
103 TestFunctional/parallel/ServiceCmdConnect 8.55
104 TestFunctional/parallel/AddonsCmd 0.15
105 TestFunctional/parallel/PersistentVolumeClaim 40.07
107 TestFunctional/parallel/SSHCmd 0.44
108 TestFunctional/parallel/CpCmd 1.58
109 TestFunctional/parallel/MySQL 29.45
110 TestFunctional/parallel/FileSync 0.23
111 TestFunctional/parallel/CertSync 1.5
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
119 TestFunctional/parallel/License 0.26
120 TestFunctional/parallel/ServiceCmd/DeployApp 11.24
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
122 TestFunctional/parallel/MountCmd/any-port 8.97
123 TestFunctional/parallel/ProfileCmd/profile_list 0.38
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
125 TestFunctional/parallel/MountCmd/specific-port 1.77
126 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
127 TestFunctional/parallel/ServiceCmd/List 0.57
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
130 TestFunctional/parallel/ServiceCmd/Format 0.37
131 TestFunctional/parallel/ServiceCmd/URL 0.31
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
144 TestFunctional/parallel/Version/short 0.06
145 TestFunctional/parallel/Version/components 0.64
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
150 TestFunctional/parallel/ImageCommands/ImageBuild 4.07
151 TestFunctional/parallel/ImageCommands/Setup 1.23
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.09
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.43
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.47
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.54
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.9
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.53
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.02
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestIngressAddonLegacy/StartLegacyK8sCluster 105.96
167 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.53
168 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.67
169 TestIngressAddonLegacy/serial/ValidateIngressAddons 30.64
172 TestJSONOutput/start/Command 66.78
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.7
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.66
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.12
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.24
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 100.7
204 TestMountStart/serial/StartWithMountFirst 28.46
205 TestMountStart/serial/VerifyMountFirst 0.42
206 TestMountStart/serial/StartWithMountSecond 29.85
207 TestMountStart/serial/VerifyMountSecond 0.43
208 TestMountStart/serial/DeleteFirst 0.71
209 TestMountStart/serial/VerifyMountPostDelete 0.43
210 TestMountStart/serial/Stop 1.44
211 TestMountStart/serial/RestartStopped 26.97
212 TestMountStart/serial/VerifyMountPostStop 0.41
215 TestMultiNode/serial/FreshStart2Nodes 112.38
216 TestMultiNode/serial/DeployApp2Nodes 5.01
217 TestMultiNode/serial/PingHostFrom2Pods 0.96
218 TestMultiNode/serial/AddNode 45.63
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.22
221 TestMultiNode/serial/CopyFile 7.93
222 TestMultiNode/serial/StopNode 2.28
223 TestMultiNode/serial/StartAfterStop 27.62
224 TestMultiNode/serial/RestartKeepsNodes 311.46
225 TestMultiNode/serial/DeleteNode 1.81
226 TestMultiNode/serial/StopMultiNode 183.2
227 TestMultiNode/serial/RestartMultiNode 94.63
228 TestMultiNode/serial/ValidateNameConflict 49.83
233 TestPreload 277.45
235 TestScheduledStopUnix 118.67
239 TestRunningBinaryUpgrade 213.43
241 TestKubernetesUpgrade 133.74
245 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
253 TestStartStop/group/old-k8s-version/serial/FirstStart 163.06
254 TestNoKubernetes/serial/StartWithK8s 102.52
262 TestNetworkPlugins/group/false 3.53
266 TestNoKubernetes/serial/StartWithStopK8s 48.35
267 TestNoKubernetes/serial/Start 28.08
268 TestStartStop/group/old-k8s-version/serial/DeployApp 10.44
269 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.89
270 TestStartStop/group/old-k8s-version/serial/Stop 91.92
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
272 TestNoKubernetes/serial/ProfileList 71.11
273 TestNoKubernetes/serial/Stop 1.25
274 TestNoKubernetes/serial/StartNoArgs 39.1
275 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
276 TestStartStop/group/old-k8s-version/serial/SecondStart 139.7
278 TestStartStop/group/no-preload/serial/FirstStart 137.95
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
281 TestStartStop/group/embed-certs/serial/FirstStart 145.49
282 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.01
283 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
284 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
285 TestStartStop/group/old-k8s-version/serial/Pause 2.67
287 TestPause/serial/Start 104.17
288 TestStartStop/group/no-preload/serial/DeployApp 8.35
289 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.28
290 TestStartStop/group/embed-certs/serial/DeployApp 9.37
291 TestStartStop/group/no-preload/serial/Stop 91.82
292 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.27
293 TestStartStop/group/embed-certs/serial/Stop 91.9
295 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 76.22
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
297 TestStartStop/group/no-preload/serial/SecondStart 312.69
298 TestPause/serial/SecondStartNoReconfiguration 26.45
299 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
301 TestStartStop/group/embed-certs/serial/SecondStart 592.95
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
303 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.82
304 TestPause/serial/Pause 0.67
305 TestPause/serial/VerifyStatus 0.28
306 TestPause/serial/Unpause 0.64
307 TestPause/serial/PauseAgain 0.76
308 TestPause/serial/DeletePaused 1.06
309 TestPause/serial/VerifyDeletedResources 34.91
311 TestStartStop/group/newest-cni/serial/FirstStart 61.53
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
313 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 335.46
314 TestStartStop/group/newest-cni/serial/DeployApp 0
315 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.38
316 TestStartStop/group/newest-cni/serial/Stop 2.12
317 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
318 TestStartStop/group/newest-cni/serial/SecondStart 48.67
319 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
320 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
321 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
322 TestStartStop/group/newest-cni/serial/Pause 2.65
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
325 TestStoppedBinaryUpgrade/Setup 0.38
326 TestStoppedBinaryUpgrade/Upgrade 119.75
327 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
328 TestStartStop/group/no-preload/serial/Pause 3.03
329 TestNetworkPlugins/group/auto/Start 89.13
330 TestNetworkPlugins/group/auto/KubeletFlags 0.24
331 TestNetworkPlugins/group/auto/NetCatPod 9.26
332 TestNetworkPlugins/group/auto/DNS 0.18
333 TestNetworkPlugins/group/auto/Localhost 0.17
334 TestNetworkPlugins/group/auto/HairPin 0.15
335 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
336 TestNetworkPlugins/group/flannel/Start 83.03
337 TestNetworkPlugins/group/enable-default-cni/Start 87.32
338 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
340 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
341 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.66
342 TestNetworkPlugins/group/bridge/Start 122.61
343 TestNetworkPlugins/group/flannel/ControllerPod 6.01
344 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
345 TestNetworkPlugins/group/flannel/NetCatPod 10.28
346 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
347 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.33
348 TestNetworkPlugins/group/flannel/DNS 0.17
349 TestNetworkPlugins/group/flannel/Localhost 0.15
350 TestNetworkPlugins/group/flannel/HairPin 0.14
351 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
352 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
353 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
354 TestNetworkPlugins/group/calico/Start 97.75
355 TestNetworkPlugins/group/kindnet/Start 98.43
356 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
357 TestNetworkPlugins/group/bridge/NetCatPod 9.35
358 TestNetworkPlugins/group/bridge/DNS 0.19
359 TestNetworkPlugins/group/bridge/Localhost 0.17
360 TestNetworkPlugins/group/bridge/HairPin 0.19
361 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
363 TestNetworkPlugins/group/custom-flannel/Start 90.79
364 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
365 TestStartStop/group/embed-certs/serial/Pause 3.46
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
368 TestNetworkPlugins/group/calico/KubeletFlags 0.24
369 TestNetworkPlugins/group/calico/NetCatPod 11.24
370 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
371 TestNetworkPlugins/group/kindnet/NetCatPod 9.23
372 TestNetworkPlugins/group/calico/DNS 0.2
373 TestNetworkPlugins/group/calico/Localhost 0.17
374 TestNetworkPlugins/group/calico/HairPin 0.15
375 TestNetworkPlugins/group/kindnet/DNS 0.2
376 TestNetworkPlugins/group/kindnet/Localhost 0.16
377 TestNetworkPlugins/group/kindnet/HairPin 0.17
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.24
380 TestNetworkPlugins/group/custom-flannel/DNS 0.16
381 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
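The pass list above is a flat `order  test-name  seconds` listing. A minimal sketch (function and variable names are illustrative, not part of the report tooling) for pulling the slowest entries out of such rows:

```python
# Parse rows shaped like the pass list above ("order name seconds")
# and return the n slowest test names. Purely illustrative helper.
def slowest(rows, n=3):
    parsed = []
    for line in rows:
        order, name, secs = line.split()
        parsed.append((float(secs), name))
    # Sort by duration descending and keep the names only.
    return [name for secs, name in sorted(parsed, reverse=True)[:n]]

# Sample rows copied from the report:
sample = [
    "224 TestMultiNode/serial/RestartKeepsNodes 311.46",
    "301 TestStartStop/group/embed-certs/serial/SecondStart 592.95",
    "233 TestPreload 277.45",
]
print(slowest(sample, 2))
# → ['TestStartStop/group/embed-certs/serial/SecondStart',
#    'TestMultiNode/serial/RestartKeepsNodes']
```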
TestDownloadOnly/v1.16.0/json-events (12.21s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-339684 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-339684 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (12.208460816s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.21s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-339684
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-339684: exit status 85 (73.082779ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-339684 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC |          |
	|         | -p download-only-339684        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 11:37:16
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 11:37:16.226167  211382 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:37:16.226413  211382 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:37:16.226422  211382 out.go:309] Setting ErrFile to fd 2...
	I0115 11:37:16.226427  211382 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:37:16.226628  211382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-203994/.minikube/bin
	W0115 11:37:16.226751  211382 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17957-203994/.minikube/config/config.json: open /home/jenkins/minikube-integration/17957-203994/.minikube/config/config.json: no such file or directory
	I0115 11:37:16.227360  211382 out.go:303] Setting JSON to true
	I0115 11:37:16.228260  211382 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":15588,"bootTime":1705303048,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 11:37:16.228326  211382 start.go:138] virtualization: kvm guest
	I0115 11:37:16.230850  211382 out.go:97] [download-only-339684] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	W0115 11:37:16.231001  211382 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17957-203994/.minikube/cache/preloaded-tarball: no such file or directory
	I0115 11:37:16.231070  211382 notify.go:220] Checking for updates...
	I0115 11:37:16.232386  211382 out.go:169] MINIKUBE_LOCATION=17957
	I0115 11:37:16.233820  211382 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:37:16.235416  211382 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17957-203994/kubeconfig
	I0115 11:37:16.236864  211382 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-203994/.minikube
	I0115 11:37:16.238144  211382 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0115 11:37:16.240699  211382 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 11:37:16.241035  211382 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:37:16.275817  211382 out.go:97] Using the kvm2 driver based on user configuration
	I0115 11:37:16.275843  211382 start.go:298] selected driver: kvm2
	I0115 11:37:16.275849  211382 start.go:902] validating driver "kvm2" against <nil>
	I0115 11:37:16.276171  211382 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 11:37:16.276267  211382 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17957-203994/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 11:37:16.291157  211382 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 11:37:16.291225  211382 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 11:37:16.291892  211382 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0115 11:37:16.292106  211382 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 11:37:16.292178  211382 cni.go:84] Creating CNI manager for ""
	I0115 11:37:16.292195  211382 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0115 11:37:16.292211  211382 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 11:37:16.292225  211382 start_flags.go:321] config:
	{Name:download-only-339684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-339684 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:37:16.292515  211382 iso.go:125] acquiring lock: {Name:mk7bc47681a8ce0f0bd494ddfd59b43adf8a6e55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 11:37:16.294297  211382 out.go:97] Downloading VM boot image ...
	I0115 11:37:16.294331  211382 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17957-203994/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0115 11:37:19.158023  211382 out.go:97] Starting control plane node download-only-339684 in cluster download-only-339684
	I0115 11:37:19.158043  211382 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0115 11:37:19.178335  211382 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0115 11:37:19.178371  211382 cache.go:56] Caching tarball of preloaded images
	I0115 11:37:19.178560  211382 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0115 11:37:19.180713  211382 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0115 11:37:19.180736  211382 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0115 11:37:19.209224  211382 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/17957-203994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0115 11:37:25.764102  211382 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0115 11:37:25.764194  211382 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17957-203994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-339684"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
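The `Last Start` log above downloads the preload tarball from a URL carrying a `checksum=md5:` hint and then verifies the saved file (preload.go:249-256). A minimal sketch of that verification step, with an illustrative helper name (this is not minikube's actual implementation):

```python
import hashlib
import os
import tempfile

def verify_md5(path, expected_hex, chunk=1 << 20):
    """Stream the file and compare its MD5 against the checksum hint
    (e.g. the `md5:d96a2b2a...` fragment in the preload URL above)."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest() == expected_hex

# Demo against a throwaway file (the real call verifies the preload tarball):
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
ok = verify_md5(f.name, "5d41402abc4b2a76b9719d911017c592")  # md5("hello")
bad = verify_md5(f.name, "0" * 32)
os.unlink(f.name)
print(ok, bad)  # → True False
```

Streaming in fixed-size chunks keeps memory flat even for multi-hundred-MB preload tarballs.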

TestDownloadOnly/v1.16.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.15s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-339684
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.28.4/json-events (5.01s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-004731 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-004731 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (5.007410839s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (5.01s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-004731
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-004731: exit status 85 (74.066347ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-339684 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC |                     |
	|         | -p download-only-339684        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:37 UTC |
	| delete  | -p download-only-339684        | download-only-339684 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:37 UTC |
	| start   | -o=json --download-only        | download-only-004731 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC |                     |
	|         | -p download-only-004731        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 11:37:28
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 11:37:28.793113  211551 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:37:28.793231  211551 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:37:28.793243  211551 out.go:309] Setting ErrFile to fd 2...
	I0115 11:37:28.793247  211551 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:37:28.793462  211551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-203994/.minikube/bin
	I0115 11:37:28.794114  211551 out.go:303] Setting JSON to true
	I0115 11:37:28.795061  211551 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":15601,"bootTime":1705303048,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 11:37:28.795133  211551 start.go:138] virtualization: kvm guest
	I0115 11:37:28.797739  211551 out.go:97] [download-only-004731] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 11:37:28.799523  211551 out.go:169] MINIKUBE_LOCATION=17957
	I0115 11:37:28.797967  211551 notify.go:220] Checking for updates...
	I0115 11:37:28.802624  211551 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:37:28.804137  211551 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17957-203994/kubeconfig
	I0115 11:37:28.805488  211551 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-203994/.minikube
	I0115 11:37:28.806693  211551 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-004731"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)
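As with v1.16.0, `minikube logs` exits with status 85 here because a download-only profile never creates a control plane node, and the test records the failure yet still passes. That acceptance pattern can be sketched with a stand-in command (minikube itself is not assumed to be installed, and the helper name is illustrative):

```python
import subprocess
import sys

def logs_exit_ok(cmd):
    """Run a logs-style command; treat exit status 85 ('control plane node
    does not exist') as acceptable for a download-only profile."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode in (0, 85)

# Stand-in process mimicking the observed `exit status 85`:
print(logs_exit_ok([sys.executable, "-c", "raise SystemExit(85)"]))  # → True
```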

TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-004731
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.29.0-rc.2/json-events (5.1s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-990313 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-990313 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (5.097084402s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (5.10s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-990313
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-990313: exit status 85 (78.163196ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-339684 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC |                     |
	|         | -p download-only-339684           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:37 UTC |
	| delete  | -p download-only-339684           | download-only-339684 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:37 UTC |
	| start   | -o=json --download-only           | download-only-004731 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC |                     |
	|         | -p download-only-004731           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:37 UTC |
	| delete  | -p download-only-004731           | download-only-004731 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC | 15 Jan 24 11:37 UTC |
	| start   | -o=json --download-only           | download-only-990313 | jenkins | v1.32.0 | 15 Jan 24 11:37 UTC |                     |
	|         | -p download-only-990313           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 11:37:34
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 11:37:34.153288  211705 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:37:34.153413  211705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:37:34.153422  211705 out.go:309] Setting ErrFile to fd 2...
	I0115 11:37:34.153427  211705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:37:34.153645  211705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-203994/.minikube/bin
	I0115 11:37:34.154240  211705 out.go:303] Setting JSON to true
	I0115 11:37:34.155092  211705 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":15606,"bootTime":1705303048,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 11:37:34.155156  211705 start.go:138] virtualization: kvm guest
	I0115 11:37:34.157585  211705 out.go:97] [download-only-990313] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 11:37:34.159089  211705 out.go:169] MINIKUBE_LOCATION=17957
	I0115 11:37:34.157750  211705 notify.go:220] Checking for updates...
	I0115 11:37:34.161900  211705 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:37:34.163228  211705 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17957-203994/kubeconfig
	I0115 11:37:34.164517  211705 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-203994/.minikube
	I0115 11:37:34.166078  211705 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0115 11:37:34.168890  211705 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 11:37:34.169152  211705 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:37:34.200518  211705 out.go:97] Using the kvm2 driver based on user configuration
	I0115 11:37:34.200545  211705 start.go:298] selected driver: kvm2
	I0115 11:37:34.200552  211705 start.go:902] validating driver "kvm2" against <nil>
	I0115 11:37:34.200885  211705 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 11:37:34.200972  211705 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17957-203994/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0115 11:37:34.215213  211705 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0115 11:37:34.215267  211705 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 11:37:34.215803  211705 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0115 11:37:34.215952  211705 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 11:37:34.216020  211705 cni.go:84] Creating CNI manager for ""
	I0115 11:37:34.216032  211705 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0115 11:37:34.216044  211705 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0115 11:37:34.216052  211705 start_flags.go:321] config:
	{Name:download-only-990313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-990313 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:37:34.216194  211705 iso.go:125] acquiring lock: {Name:mk7bc47681a8ce0f0bd494ddfd59b43adf8a6e55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 11:37:34.217965  211705 out.go:97] Starting control plane node download-only-990313 in cluster download-only-990313
	I0115 11:37:34.217978  211705 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0115 11:37:34.233272  211705 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0115 11:37:34.233295  211705 cache.go:56] Caching tarball of preloaded images
	I0115 11:37:34.233435  211705 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0115 11:37:34.235273  211705 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0115 11:37:34.235287  211705 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0115 11:37:34.257936  211705 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:e143dbc3b8285cd3241a841ac2b6b7fc -> /home/jenkins/minikube-integration/17957-203994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0115 11:37:36.954396  211705 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0115 11:37:36.954530  211705 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17957-203994/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0115 11:37:37.760646  211705 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on containerd
	I0115 11:37:37.761011  211705 profile.go:148] Saving config to /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/download-only-990313/config.json ...
	I0115 11:37:37.761044  211705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/download-only-990313/config.json: {Name:mk57c9acc1df16ad321bf4c6862121796492705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:37:37.761212  211705 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0115 11:37:37.761344  211705 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17957-203994/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-990313"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.15s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-990313
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-871944 --alsologtostderr --binary-mirror http://127.0.0.1:42239 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-871944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-871944
--- PASS: TestBinaryMirror (0.58s)

TestOffline (67.49s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-936251 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-936251 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m6.411564403s)
helpers_test.go:175: Cleaning up "offline-containerd-936251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-936251
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-936251: (1.07782876s)
--- PASS: TestOffline (67.49s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-431563
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-431563: exit status 85 (62.705947ms)

-- stdout --
	* Profile "addons-431563" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-431563"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-431563
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-431563: exit status 85 (63.844875ms)

-- stdout --
	* Profile "addons-431563" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-431563"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (145.01s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-431563 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-431563 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m25.008984427s)
--- PASS: TestAddons/Setup (145.01s)

TestAddons/parallel/Registry (16.55s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 106.214167ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-lb2gb" [dcaf122f-5ed9-41c0-b3af-9ba6f721c0b5] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00497863s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wwnrn" [593585c9-6917-42ab-9269-d5dfcddaed1b] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009824703s
addons_test.go:340: (dbg) Run:  kubectl --context addons-431563 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-431563 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-431563 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.582801915s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-431563 ip
2024/01/15 11:40:21 [DEBUG] GET http://192.168.39.212:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-431563 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.55s)

TestAddons/parallel/Ingress (22.24s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-431563 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-431563 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-431563 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [43228f7b-56eb-4a7b-b117-fef9e621c076] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [43228f7b-56eb-4a7b-b117-fef9e621c076] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.006043479s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-431563 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-431563 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-431563 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.212
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-431563 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-431563 addons disable ingress-dns --alsologtostderr -v=1: (1.959232483s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-431563 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-431563 addons disable ingress --alsologtostderr -v=1: (8.003314433s)
--- PASS: TestAddons/parallel/Ingress (22.24s)

TestAddons/parallel/InspektorGadget (12.03s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tzpgz" [1ee82911-09c2-4b18-a0db-296d9a9fc97c] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005301607s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-431563
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-431563: (6.027762499s)
--- PASS: TestAddons/parallel/InspektorGadget (12.03s)

TestAddons/parallel/MetricsServer (6.86s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.838675ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-jx62w" [24b2c90f-46cf-4c50-aabe-d7bba12f8c79] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006877253s
addons_test.go:415: (dbg) Run:  kubectl --context addons-431563 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-431563 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.86s)

TestAddons/parallel/CSI (63.58s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 111.387617ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-431563 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-431563 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [be35316b-6a89-44b9-86a4-d09ffe1f83fd] Pending
helpers_test.go:344: "task-pv-pod" [be35316b-6a89-44b9-86a4-d09ffe1f83fd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [be35316b-6a89-44b9-86a4-d09ffe1f83fd] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.011439397s
addons_test.go:584: (dbg) Run:  kubectl --context addons-431563 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-431563 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-431563 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-431563 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-431563 delete pod task-pv-pod: (1.243965365s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-431563 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-431563 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-431563 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8f3682ea-ca23-42fd-aca0-b29c41c8b9b1] Pending
helpers_test.go:344: "task-pv-pod-restore" [8f3682ea-ca23-42fd-aca0-b29c41c8b9b1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8f3682ea-ca23-42fd-aca0-b29c41c8b9b1] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005662534s
addons_test.go:626: (dbg) Run:  kubectl --context addons-431563 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-431563 delete pod task-pv-pod-restore: (1.004696889s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-431563 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-431563 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-431563 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-431563 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.097633078s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-431563 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (63.58s)

                                                
                                    
TestAddons/parallel/Headlamp (13.8s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-431563 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-431563 --alsologtostderr -v=1: (1.796819033s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-kpns7" [32c99362-c092-4742-8c79-a2f0d9689044] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-kpns7" [32c99362-c092-4742-8c79-a2f0d9689044] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-kpns7" [32c99362-c092-4742-8c79-a2f0d9689044] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005410516s
--- PASS: TestAddons/parallel/Headlamp (13.80s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.29s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-q78zv" [d4c6a6f0-025a-4632-a975-9ddaff123002] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004152536s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-431563
addons_test.go:860: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-431563: (1.285051239s)
--- PASS: TestAddons/parallel/CloudSpanner (6.29s)

                                                
                                    
TestAddons/parallel/LocalPath (55.02s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-431563 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-431563 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-431563 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [57492026-49e3-4217-acbe-671bcf3460d8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [57492026-49e3-4217-acbe-671bcf3460d8] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [57492026-49e3-4217-acbe-671bcf3460d8] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.006699362s
addons_test.go:891: (dbg) Run:  kubectl --context addons-431563 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-431563 ssh "cat /opt/local-path-provisioner/pvc-d540bbe1-6ee4-4fa1-a0cc-5bdcb43e6364_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-431563 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-431563 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-431563 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-431563 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.067321909s)
--- PASS: TestAddons/parallel/LocalPath (55.02s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-x5nt7" [a6842468-f8ad-4cc8-8fc7-6ab79386a78a] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0083898s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-431563
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.54s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-vdggf" [2f1161bc-9a5c-4c3a-96be-32f1604c6a99] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003911575s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-431563 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-431563 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/StoppedEnableDisable (92.3s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-431563
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-431563: (1m31.963860634s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-431563
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-431563
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-431563
--- PASS: TestAddons/StoppedEnableDisable (92.30s)

                                                
                                    
TestCertOptions (63.47s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-229937 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-229937 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m1.875642365s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-229937 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-229937 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-229937 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-229937" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-229937
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-229937: (1.05598898s)
--- PASS: TestCertOptions (63.47s)
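The SAN assertions above hinge on `openssl x509 -text -noout` output. A self-contained sketch of the same inspection, run against a throwaway self-signed certificate rather than minikube's real apiserver.crt (file paths here are illustrative):

```shell
# Generate a throwaway key and self-signed cert carrying the same extra
# names and IPs the test passes via --apiserver-names / --apiserver-ips.
# (-addext needs OpenSSL 1.1.1+.)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/apiserver-demo.key -out /tmp/apiserver-demo.crt \
  -days 1 -subj "/CN=minikube" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15"

# Dump the certificate text and pull out the SAN list, mirroring the
# check the test runs over the apiserver.crt it reads via ssh.
openssl x509 -text -noout -in /tmp/apiserver-demo.crt |
  grep -A1 "Subject Alternative Name"
```

The grep should surface all four SANs; if any requested name or IP is missing from the cert, the corresponding test assertion fails.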

                                                
                                    
TestCertExpiration (249.21s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-438039 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
E0115 12:18:08.680861  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-438039 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (50.015507415s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-438039 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-438039 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (17.507811018s)
helpers_test.go:175: Cleaning up "cert-expiration-438039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-438039
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-438039: (1.681186911s)
--- PASS: TestCertExpiration (249.21s)
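TestCertExpiration boots the cluster with certificates valid for only 3m, then restarts with `--cert-expiration=8760h` to force regeneration. The underlying expiry check can be sketched with `openssl x509 -checkend` on a locally generated short-lived certificate (paths illustrative):

```shell
# A cert valid for one day stands in for a soon-to-expire apiserver cert.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/shortlived.key -out /tmp/shortlived.crt \
  -days 1 -subj "/CN=minikube"

# -checkend N exits 0 if the cert will still be valid N seconds from now,
# non-zero if it will have expired by then.
openssl x509 -checkend 180 -in /tmp/shortlived.crt \
  && echo "still valid 3m from now"      # 180s is within the 1-day window
openssl x509 -checkend 31536000 -in /tmp/shortlived.crt \
  || echo "expires within 8760h"         # 1 year is past the 1-day window
```

The exit-code semantics are what make `-checkend` convenient for scripted health checks like this one.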

                                                
                                    
TestForceSystemdFlag (72.32s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-931849 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-931849 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m11.058965215s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-931849 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-931849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-931849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-931849: (1.03728116s)
--- PASS: TestForceSystemdFlag (72.32s)
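Both force-systemd tests read /etc/containerd/config.toml out of the VM; the property of interest is containerd's cgroup driver. A self-contained sketch of that assertion against a sample config fragment (the TOML below is illustrative, not minikube's exact template):

```shell
# Minimal stand-in for the runc options section of containerd's config.
cat > /tmp/containerd-demo.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# With --force-systemd, the expectation is that containerd is configured
# for the systemd cgroup driver rather than cgroupfs.
grep -q 'SystemdCgroup = true' /tmp/containerd-demo.toml \
  && echo "systemd cgroup driver enabled"
```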

                                                
                                    
TestForceSystemdEnv (71.51s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-683283 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0115 12:16:05.039976  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-683283 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m8.544008158s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-683283 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-683283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-683283
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-683283: (2.714354891s)
--- PASS: TestForceSystemdEnv (71.51s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.31s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.31s)

                                                
                                    
TestErrorSpam/setup (47.75s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-749663 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-749663 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-749663 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-749663 --driver=kvm2  --container-runtime=containerd: (47.748144993s)
--- PASS: TestErrorSpam/setup (47.75s)

                                                
                                    
TestErrorSpam/start (0.41s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

                                                
                                    
TestErrorSpam/status (0.81s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 status
--- PASS: TestErrorSpam/status (0.81s)

                                                
                                    
TestErrorSpam/pause (1.57s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 pause
--- PASS: TestErrorSpam/pause (1.57s)

                                                
                                    
TestErrorSpam/unpause (1.72s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

                                                
                                    
TestErrorSpam/stop (2.28s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 stop: (2.105585369s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-749663 --log_dir /tmp/nospam-749663 stop
--- PASS: TestErrorSpam/stop (2.28s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17957-203994/.minikube/files/etc/test/nested/copy/211370/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (65.1s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-997529 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-997529 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m5.102510046s)
--- PASS: TestFunctional/serial/StartWithProxy (65.10s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.81s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-997529 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-997529 --alsologtostderr -v=8: (5.810819907s)
functional_test.go:659: soft start took 5.811601194s for "functional-997529" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.81s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-997529 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 cache add registry.k8s.io/pause:3.1
E0115 11:45:05.633365  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
E0115 11:45:05.639287  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
E0115 11:45:05.649680  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
E0115 11:45:05.670007  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
E0115 11:45:05.710414  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
E0115 11:45:05.790862  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
E0115 11:45:05.951354  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-997529 cache add registry.k8s.io/pause:3.1: (1.143953377s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 cache add registry.k8s.io/pause:3.3
E0115 11:45:06.272518  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
E0115 11:45:06.913577  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-997529 cache add registry.k8s.io/pause:3.3: (1.239200325s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 cache add registry.k8s.io/pause:latest
E0115 11:45:08.194175  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-997529 cache add registry.k8s.io/pause:latest: (1.212836431s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.60s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-997529 /tmp/TestFunctionalserialCacheCmdcacheadd_local3011165019/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 cache add minikube-local-cache-test:functional-997529
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-997529 cache add minikube-local-cache-test:functional-997529: (1.747741413s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 cache delete minikube-local-cache-test:functional-997529
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-997529
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
E0115 11:45:10.754535  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-997529 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (240.626449ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-997529 cache reload: (1.205763506s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 kubectl -- --context functional-997529 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-997529 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (44.33s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-997529 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0115 11:45:15.875563  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
E0115 11:45:26.116129  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
E0115 11:45:46.596684  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-997529 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.334213487s)
functional_test.go:757: restart took 44.334340573s for "functional-997529" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.33s)

TestFunctional/serial/ComponentHealth (0.08s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-997529 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)

TestFunctional/serial/LogsCmd (1.53s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-997529 logs: (1.5273161s)
--- PASS: TestFunctional/serial/LogsCmd (1.53s)

TestFunctional/serial/LogsFileCmd (1.56s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 logs --file /tmp/TestFunctionalserialLogsFileCmd688045549/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-997529 logs --file /tmp/TestFunctionalserialLogsFileCmd688045549/001/logs.txt: (1.557871142s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.56s)

TestFunctional/serial/InvalidService (3.93s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-997529 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-997529
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-997529: exit status 115 (324.601976ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.53:32623 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-997529 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.93s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-997529 config get cpus: exit status 14 (79.395721ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-997529 config get cpus: exit status 14 (68.475194ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
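The ConfigCmd transcript above relies on `config get` exiting with status 14 (rather than printing anything to stdout) when the key has been unset. A minimal sketch of scripting around that behavior, where `minikube_config_get` is a hypothetical stub standing in for `minikube -p <profile> config get cpus` (the real binary is not assumed to be on PATH):

```shell
#!/bin/sh
# Hypothetical stub mimicking the behavior seen in the log:
# `minikube config get <key>` prints the value and exits 0 when the key
# is set, and exits 14 with an error on stderr when it is not.
minikube_config_get() {
  if [ -n "${MINIKUBE_TEST_CPUS:-}" ]; then
    printf '%s\n' "$MINIKUBE_TEST_CPUS"
  else
    echo "Error: specified key could not be found in config" >&2
    return 14
  fi
}

# Scripting pattern: branch on the exit status instead of parsing stderr.
if cpus=$(minikube_config_get); then
  echo "cpus configured: $cpus"
else
  echo "cpus not set"
fi
```

This mirrors the test flow above: `unset` followed by `get` yields the non-zero status, `set cpus 2` followed by `get` yields the value with status 0.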

TestFunctional/parallel/DashboardCmd (15.82s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-997529 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-997529 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 217581: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.82s)

TestFunctional/parallel/DryRun (0.37s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-997529 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-997529 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (198.202211ms)

-- stdout --
	* [functional-997529] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-203994/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-203994/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0115 11:46:06.633934  217405 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:46:06.634210  217405 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:46:06.634246  217405 out.go:309] Setting ErrFile to fd 2...
	I0115 11:46:06.634263  217405 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:46:06.634631  217405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-203994/.minikube/bin
	I0115 11:46:06.635355  217405 out.go:303] Setting JSON to false
	I0115 11:46:06.636759  217405 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":16119,"bootTime":1705303048,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 11:46:06.636887  217405 start.go:138] virtualization: kvm guest
	I0115 11:46:06.639542  217405 out.go:177] * [functional-997529] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 11:46:06.641233  217405 out.go:177]   - MINIKUBE_LOCATION=17957
	I0115 11:46:06.641191  217405 notify.go:220] Checking for updates...
	I0115 11:46:06.643489  217405 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:46:06.645166  217405 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17957-203994/kubeconfig
	I0115 11:46:06.646828  217405 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-203994/.minikube
	I0115 11:46:06.648487  217405 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 11:46:06.650052  217405 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 11:46:06.652952  217405 config.go:182] Loaded profile config "functional-997529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:46:06.653565  217405 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:46:06.653713  217405 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:46:06.671464  217405 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0115 11:46:06.672114  217405 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:46:06.673853  217405 main.go:141] libmachine: Using API Version  1
	I0115 11:46:06.673869  217405 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:46:06.674434  217405 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:46:06.674700  217405 main.go:141] libmachine: (functional-997529) Calling .DriverName
	I0115 11:46:06.675005  217405 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:46:06.675896  217405 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:46:06.680620  217405 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:46:06.696752  217405 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41757
	I0115 11:46:06.697267  217405 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:46:06.697785  217405 main.go:141] libmachine: Using API Version  1
	I0115 11:46:06.697810  217405 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:46:06.703733  217405 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:46:06.704069  217405 main.go:141] libmachine: (functional-997529) Calling .DriverName
	I0115 11:46:06.745012  217405 out.go:177] * Using the kvm2 driver based on existing profile
	I0115 11:46:06.746537  217405 start.go:298] selected driver: kvm2
	I0115 11:46:06.746554  217405 start.go:902] validating driver "kvm2" against &{Name:functional-997529 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-997529 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.53 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:46:06.746655  217405 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 11:46:06.748880  217405 out.go:177] 
	W0115 11:46:06.750513  217405 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0115 11:46:06.752109  217405 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-997529 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.37s)

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-997529 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-997529 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (171.142071ms)

-- stdout --
	* [functional-997529] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-203994/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-203994/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0115 11:46:06.452367  217344 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:46:06.452606  217344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:46:06.452641  217344 out.go:309] Setting ErrFile to fd 2...
	I0115 11:46:06.452653  217344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:46:06.452968  217344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-203994/.minikube/bin
	I0115 11:46:06.453534  217344 out.go:303] Setting JSON to false
	I0115 11:46:06.454724  217344 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":16119,"bootTime":1705303048,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 11:46:06.454881  217344 start.go:138] virtualization: kvm guest
	I0115 11:46:06.457669  217344 out.go:177] * [functional-997529] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0115 11:46:06.459611  217344 notify.go:220] Checking for updates...
	I0115 11:46:06.461399  217344 out.go:177]   - MINIKUBE_LOCATION=17957
	I0115 11:46:06.463040  217344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:46:06.464529  217344 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17957-203994/kubeconfig
	I0115 11:46:06.465936  217344 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-203994/.minikube
	I0115 11:46:06.467474  217344 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 11:46:06.468952  217344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 11:46:06.470831  217344 config.go:182] Loaded profile config "functional-997529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:46:06.471368  217344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:46:06.471438  217344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:46:06.489217  217344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38993
	I0115 11:46:06.489656  217344 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:46:06.490310  217344 main.go:141] libmachine: Using API Version  1
	I0115 11:46:06.490345  217344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:46:06.490723  217344 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:46:06.490942  217344 main.go:141] libmachine: (functional-997529) Calling .DriverName
	I0115 11:46:06.491184  217344 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:46:06.491495  217344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:46:06.491541  217344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:46:06.508911  217344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
	I0115 11:46:06.509356  217344 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:46:06.509879  217344 main.go:141] libmachine: Using API Version  1
	I0115 11:46:06.509904  217344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:46:06.510234  217344 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:46:06.510435  217344 main.go:141] libmachine: (functional-997529) Calling .DriverName
	I0115 11:46:06.546570  217344 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0115 11:46:06.548278  217344 start.go:298] selected driver: kvm2
	I0115 11:46:06.548295  217344 start.go:902] validating driver "kvm2" against &{Name:functional-997529 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-997529 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.53 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:46:06.548387  217344 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 11:46:06.550925  217344 out.go:177] 
	W0115 11:46:06.552581  217344 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0115 11:46:06.554257  217344 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.15s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)

TestFunctional/parallel/ServiceCmdConnect (8.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-997529 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-997529 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-d8cw8" [34442a74-3d1e-4af5-ae60-5efcf656dc5e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-d8cw8" [34442a74-3d1e-4af5-ae60-5efcf656dc5e] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005377264s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.53:30306
functional_test.go:1674: http://192.168.50.53:30306: success! body:

Hostname: hello-node-connect-55497b8b78-d8cw8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.53:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.53:30306
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.55s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (40.07s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [029d5341-e83c-4a9e-b854-3661a1bedc6b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004965257s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-997529 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-997529 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-997529 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-997529 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [016d33c7-2931-4fbd-abb5-b5aa8db89aa0] Pending
helpers_test.go:344: "sp-pod" [016d33c7-2931-4fbd-abb5-b5aa8db89aa0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [016d33c7-2931-4fbd-abb5-b5aa8db89aa0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004671175s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-997529 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-997529 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-997529 delete -f testdata/storage-provisioner/pod.yaml: (2.11845807s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-997529 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [015fda1c-003b-44b9-897d-493bda8fba1a] Pending
helpers_test.go:344: "sp-pod" [015fda1c-003b-44b9-897d-493bda8fba1a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [015fda1c-003b-44b9-897d-493bda8fba1a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004772309s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-997529 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.07s)

TestFunctional/parallel/SSHCmd (0.44s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

TestFunctional/parallel/CpCmd (1.58s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh -n functional-997529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 cp functional-997529:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd589398700/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh -n functional-997529 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh -n functional-997529 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.58s)

TestFunctional/parallel/MySQL (29.45s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-997529 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-zlqd4" [3f4c964f-5b3e-4d99-af73-a010f1c8facd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-zlqd4" [3f4c964f-5b3e-4d99-af73-a010f1c8facd] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.005806924s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-997529 exec mysql-859648c796-zlqd4 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-997529 exec mysql-859648c796-zlqd4 -- mysql -ppassword -e "show databases;": exit status 1 (154.148308ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-997529 exec mysql-859648c796-zlqd4 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-997529 exec mysql-859648c796-zlqd4 -- mysql -ppassword -e "show databases;": exit status 1 (166.68905ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-997529 exec mysql-859648c796-zlqd4 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-997529 exec mysql-859648c796-zlqd4 -- mysql -ppassword -e "show databases;": exit status 1 (145.022145ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-997529 exec mysql-859648c796-zlqd4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.45s)

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/211370/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "sudo cat /etc/test/nested/copy/211370/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.5s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/211370.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "sudo cat /etc/ssl/certs/211370.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/211370.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "sudo cat /usr/share/ca-certificates/211370.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/2113702.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "sudo cat /etc/ssl/certs/2113702.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/2113702.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "sudo cat /usr/share/ca-certificates/2113702.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.50s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-997529 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-997529 ssh "sudo systemctl is-active docker": exit status 1 (240.160446ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-997529 ssh "sudo systemctl is-active crio": exit status 1 (259.619684ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
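The non-zero exits above are expected: `systemctl is-active` prints the unit state and exits non-zero for any state other than "active" (exit status 3 corresponds to the "inactive" seen here), which is exactly what the test wants for docker and crio when containerd is the active runtime. A small sketch of that check, with an illustrative `runtimeDisabled` name (not minikube's helper):

```go
package main

import (
	"fmt"
	"strings"
)

// runtimeDisabled interprets the result of `systemctl is-active <unit>`:
// a non-zero exit with "inactive" on stdout means the runtime's service
// exists but is not running, matching the stdout/exit pairs in the log.
func runtimeDisabled(stdout string, exitCode int) bool {
	return exitCode != 0 && strings.TrimSpace(stdout) == "inactive"
}

func main() {
	// The values observed above for both docker and crio.
	fmt.Println(runtimeDisabled("inactive\n", 3)) // true
	fmt.Println(runtimeDisabled("active\n", 0))   // false
}
```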

TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-997529 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-997529 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-gnn5g" [a8eb0e0a-f408-4afe-aa05-68c5a22e8ecf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-gnn5g" [a8eb0e0a-f408-4afe-aa05-68c5a22e8ecf] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.009002568s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/MountCmd/any-port (8.97s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-997529 /tmp/TestFunctionalparallelMountCmdany-port3595383856/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705319165282914129" to /tmp/TestFunctionalparallelMountCmdany-port3595383856/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705319165282914129" to /tmp/TestFunctionalparallelMountCmdany-port3595383856/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705319165282914129" to /tmp/TestFunctionalparallelMountCmdany-port3595383856/001/test-1705319165282914129
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-997529 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (263.790852ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 15 11:46 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 15 11:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 15 11:46 test-1705319165282914129
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh cat /mount-9p/test-1705319165282914129
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-997529 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ee1fc862-9add-46e1-a4c6-477e1cb7cc95] Pending
helpers_test.go:344: "busybox-mount" [ee1fc862-9add-46e1-a4c6-477e1cb7cc95] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ee1fc862-9add-46e1-a4c6-477e1cb7cc95] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ee1fc862-9add-46e1-a4c6-477e1cb7cc95] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004445812s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-997529 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-997529 /tmp/TestFunctionalparallelMountCmdany-port3595383856/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.97s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "309.138633ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "69.175189ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "241.958075ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "75.432112ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/MountCmd/specific-port (1.77s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-997529 /tmp/TestFunctionalparallelMountCmdspecific-port4287803083/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-997529 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (229.622592ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-997529 /tmp/TestFunctionalparallelMountCmdspecific-port4287803083/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-997529 ssh "sudo umount -f /mount-9p": exit status 1 (220.173281ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-997529 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-997529 /tmp/TestFunctionalparallelMountCmdspecific-port4287803083/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.77s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-997529 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1320669470/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-997529 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1320669470/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-997529 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1320669470/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-997529 ssh "findmnt -T" /mount1: exit status 1 (358.419475ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-997529 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-997529 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1320669470/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-997529 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1320669470/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-997529 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1320669470/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)

TestFunctional/parallel/ServiceCmd/List (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 service list -o json
functional_test.go:1493: Took "543.479259ms" to run "out/minikube-linux-amd64 -p functional-997529 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.53:30547
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.53:30547
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.64s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 version -o=json --components
E0115 11:46:27.557848  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/Version/components (0.64s)
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-997529 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-997529
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-997529
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-997529 image ls --format short --alsologtostderr:
I0115 11:46:42.013163  219239 out.go:296] Setting OutFile to fd 1 ...
I0115 11:46:42.013350  219239 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:46:42.013365  219239 out.go:309] Setting ErrFile to fd 2...
I0115 11:46:42.013372  219239 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:46:42.013689  219239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-203994/.minikube/bin
I0115 11:46:42.014392  219239 config.go:182] Loaded profile config "functional-997529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:46:42.014540  219239 config.go:182] Loaded profile config "functional-997529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:46:42.015098  219239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 11:46:42.015157  219239 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 11:46:42.028914  219239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46291
I0115 11:46:42.029481  219239 main.go:141] libmachine: () Calling .GetVersion
I0115 11:46:42.030096  219239 main.go:141] libmachine: Using API Version  1
I0115 11:46:42.030124  219239 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 11:46:42.030462  219239 main.go:141] libmachine: () Calling .GetMachineName
I0115 11:46:42.030703  219239 main.go:141] libmachine: (functional-997529) Calling .GetState
I0115 11:46:42.033085  219239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 11:46:42.033128  219239 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 11:46:42.048589  219239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44221
I0115 11:46:42.048987  219239 main.go:141] libmachine: () Calling .GetVersion
I0115 11:46:42.049525  219239 main.go:141] libmachine: Using API Version  1
I0115 11:46:42.049552  219239 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 11:46:42.050015  219239 main.go:141] libmachine: () Calling .GetMachineName
I0115 11:46:42.050252  219239 main.go:141] libmachine: (functional-997529) Calling .DriverName
I0115 11:46:42.050514  219239 ssh_runner.go:195] Run: systemctl --version
I0115 11:46:42.050576  219239 main.go:141] libmachine: (functional-997529) Calling .GetSSHHostname
I0115 11:46:42.053613  219239 main.go:141] libmachine: (functional-997529) DBG | domain functional-997529 has defined MAC address 52:54:00:f2:eb:19 in network mk-functional-997529
I0115 11:46:42.053981  219239 main.go:141] libmachine: (functional-997529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:eb:19", ip: ""} in network mk-functional-997529: {Iface:virbr1 ExpiryTime:2024-01-15 12:44:10 +0000 UTC Type:0 Mac:52:54:00:f2:eb:19 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:functional-997529 Clientid:01:52:54:00:f2:eb:19}
I0115 11:46:42.054015  219239 main.go:141] libmachine: (functional-997529) DBG | domain functional-997529 has defined IP address 192.168.50.53 and MAC address 52:54:00:f2:eb:19 in network mk-functional-997529
I0115 11:46:42.054147  219239 main.go:141] libmachine: (functional-997529) Calling .GetSSHPort
I0115 11:46:42.054314  219239 main.go:141] libmachine: (functional-997529) Calling .GetSSHKeyPath
I0115 11:46:42.054459  219239 main.go:141] libmachine: (functional-997529) Calling .GetSSHUsername
I0115 11:46:42.054589  219239 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/functional-997529/id_rsa Username:docker}
I0115 11:46:42.143246  219239 ssh_runner.go:195] Run: sudo crictl images --output json
I0115 11:46:42.233580  219239 main.go:141] libmachine: Making call to close driver server
I0115 11:46:42.233617  219239 main.go:141] libmachine: (functional-997529) Calling .Close
I0115 11:46:42.233889  219239 main.go:141] libmachine: Successfully made call to close driver server
I0115 11:46:42.233910  219239 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 11:46:42.233920  219239 main.go:141] libmachine: Making call to close driver server
I0115 11:46:42.233938  219239 main.go:141] libmachine: (functional-997529) DBG | Closing plugin on server side
I0115 11:46:42.234000  219239 main.go:141] libmachine: (functional-997529) Calling .Close
I0115 11:46:42.234310  219239 main.go:141] libmachine: Successfully made call to close driver server
I0115 11:46:42.234327  219239 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 11:46:42.234368  219239 main.go:141] libmachine: (functional-997529) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-997529 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/minikube-local-cache-test | functional-997529  | sha256:3fa199 | 1.01kB |
| docker.io/library/nginx                     | latest             | sha256:a87587 | 70.5MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:d058aa | 33.4MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:e3db31 | 18.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| gcr.io/google-containers/addon-resizer      | functional-997529  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:83f6cc | 24.6MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-997529 image ls --format table --alsologtostderr:
I0115 11:46:42.311977  219296 out.go:296] Setting OutFile to fd 1 ...
I0115 11:46:42.312114  219296 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:46:42.312122  219296 out.go:309] Setting ErrFile to fd 2...
I0115 11:46:42.312130  219296 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:46:42.312387  219296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-203994/.minikube/bin
I0115 11:46:42.313214  219296 config.go:182] Loaded profile config "functional-997529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:46:42.313367  219296 config.go:182] Loaded profile config "functional-997529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:46:42.313995  219296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 11:46:42.314058  219296 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 11:46:42.329958  219296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42837
I0115 11:46:42.330394  219296 main.go:141] libmachine: () Calling .GetVersion
I0115 11:46:42.330950  219296 main.go:141] libmachine: Using API Version  1
I0115 11:46:42.330984  219296 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 11:46:42.331420  219296 main.go:141] libmachine: () Calling .GetMachineName
I0115 11:46:42.331646  219296 main.go:141] libmachine: (functional-997529) Calling .GetState
I0115 11:46:42.333964  219296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 11:46:42.334011  219296 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 11:46:42.349881  219296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
I0115 11:46:42.350398  219296 main.go:141] libmachine: () Calling .GetVersion
I0115 11:46:42.350922  219296 main.go:141] libmachine: Using API Version  1
I0115 11:46:42.350949  219296 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 11:46:42.351295  219296 main.go:141] libmachine: () Calling .GetMachineName
I0115 11:46:42.351506  219296 main.go:141] libmachine: (functional-997529) Calling .DriverName
I0115 11:46:42.351728  219296 ssh_runner.go:195] Run: systemctl --version
I0115 11:46:42.351762  219296 main.go:141] libmachine: (functional-997529) Calling .GetSSHHostname
I0115 11:46:42.355114  219296 main.go:141] libmachine: (functional-997529) DBG | domain functional-997529 has defined MAC address 52:54:00:f2:eb:19 in network mk-functional-997529
I0115 11:46:42.355589  219296 main.go:141] libmachine: (functional-997529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:eb:19", ip: ""} in network mk-functional-997529: {Iface:virbr1 ExpiryTime:2024-01-15 12:44:10 +0000 UTC Type:0 Mac:52:54:00:f2:eb:19 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:functional-997529 Clientid:01:52:54:00:f2:eb:19}
I0115 11:46:42.355704  219296 main.go:141] libmachine: (functional-997529) DBG | domain functional-997529 has defined IP address 192.168.50.53 and MAC address 52:54:00:f2:eb:19 in network mk-functional-997529
I0115 11:46:42.355779  219296 main.go:141] libmachine: (functional-997529) Calling .GetSSHPort
I0115 11:46:42.355962  219296 main.go:141] libmachine: (functional-997529) Calling .GetSSHKeyPath
I0115 11:46:42.356154  219296 main.go:141] libmachine: (functional-997529) Calling .GetSSHUsername
I0115 11:46:42.356319  219296 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/functional-997529/id_rsa Username:docker}
I0115 11:46:42.450999  219296 ssh_runner.go:195] Run: sudo crictl images --output json
I0115 11:46:42.513872  219296 main.go:141] libmachine: Making call to close driver server
I0115 11:46:42.513891  219296 main.go:141] libmachine: (functional-997529) Calling .Close
I0115 11:46:42.514208  219296 main.go:141] libmachine: Successfully made call to close driver server
I0115 11:46:42.514235  219296 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 11:46:42.514247  219296 main.go:141] libmachine: Making call to close driver server
I0115 11:46:42.514258  219296 main.go:141] libmachine: (functional-997529) Calling .Close
I0115 11:46:42.514475  219296 main.go:141] libmachine: Successfully made call to close driver server
I0115 11:46:42.514492  219296 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 11:46:42.514519  219296 main.go:141] libmachine: (functional-997529) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-997529 image ls --format json --alsologtostderr:
[{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:3fa1997b8cd712c425a6ea35ed1a9c86306f286a21989adc08ab1226c2e39b74","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-997529"],"size":"1007"},{"id":"sha256:a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":["docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"70520324"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"24581402"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-997529"],"size":"10823156"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-997529 image ls --format json --alsologtostderr:
I0115 11:46:42.294912  219285 out.go:296] Setting OutFile to fd 1 ...
I0115 11:46:42.295231  219285 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:46:42.295243  219285 out.go:309] Setting ErrFile to fd 2...
I0115 11:46:42.295248  219285 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:46:42.295483  219285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-203994/.minikube/bin
I0115 11:46:42.296302  219285 config.go:182] Loaded profile config "functional-997529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:46:42.296411  219285 config.go:182] Loaded profile config "functional-997529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:46:42.296856  219285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 11:46:42.296930  219285 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 11:46:42.311535  219285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45345
I0115 11:46:42.312092  219285 main.go:141] libmachine: () Calling .GetVersion
I0115 11:46:42.312745  219285 main.go:141] libmachine: Using API Version  1
I0115 11:46:42.312781  219285 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 11:46:42.313133  219285 main.go:141] libmachine: () Calling .GetMachineName
I0115 11:46:42.313348  219285 main.go:141] libmachine: (functional-997529) Calling .GetState
I0115 11:46:42.315474  219285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 11:46:42.315511  219285 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 11:46:42.331609  219285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37937
I0115 11:46:42.332039  219285 main.go:141] libmachine: () Calling .GetVersion
I0115 11:46:42.332499  219285 main.go:141] libmachine: Using API Version  1
I0115 11:46:42.332525  219285 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 11:46:42.332890  219285 main.go:141] libmachine: () Calling .GetMachineName
I0115 11:46:42.333102  219285 main.go:141] libmachine: (functional-997529) Calling .DriverName
I0115 11:46:42.333346  219285 ssh_runner.go:195] Run: systemctl --version
I0115 11:46:42.333380  219285 main.go:141] libmachine: (functional-997529) Calling .GetSSHHostname
I0115 11:46:42.336368  219285 main.go:141] libmachine: (functional-997529) DBG | domain functional-997529 has defined MAC address 52:54:00:f2:eb:19 in network mk-functional-997529
I0115 11:46:42.336831  219285 main.go:141] libmachine: (functional-997529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:eb:19", ip: ""} in network mk-functional-997529: {Iface:virbr1 ExpiryTime:2024-01-15 12:44:10 +0000 UTC Type:0 Mac:52:54:00:f2:eb:19 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:functional-997529 Clientid:01:52:54:00:f2:eb:19}
I0115 11:46:42.336860  219285 main.go:141] libmachine: (functional-997529) DBG | domain functional-997529 has defined IP address 192.168.50.53 and MAC address 52:54:00:f2:eb:19 in network mk-functional-997529
I0115 11:46:42.337037  219285 main.go:141] libmachine: (functional-997529) Calling .GetSSHPort
I0115 11:46:42.337272  219285 main.go:141] libmachine: (functional-997529) Calling .GetSSHKeyPath
I0115 11:46:42.337457  219285 main.go:141] libmachine: (functional-997529) Calling .GetSSHUsername
I0115 11:46:42.337630  219285 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/functional-997529/id_rsa Username:docker}
I0115 11:46:42.422217  219285 ssh_runner.go:195] Run: sudo crictl images --output json
I0115 11:46:42.472236  219285 main.go:141] libmachine: Making call to close driver server
I0115 11:46:42.472255  219285 main.go:141] libmachine: (functional-997529) Calling .Close
I0115 11:46:42.472574  219285 main.go:141] libmachine: Successfully made call to close driver server
I0115 11:46:42.472595  219285 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 11:46:42.472605  219285 main.go:141] libmachine: Making call to close driver server
I0115 11:46:42.472617  219285 main.go:141] libmachine: (functional-997529) Calling .Close
I0115 11:46:42.472875  219285 main.go:141] libmachine: Successfully made call to close driver server
I0115 11:46:42.472887  219285 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 11:46:42.472905  219285 main.go:141] libmachine: (functional-997529) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-997529 image ls --format yaml --alsologtostderr:
- id: sha256:3fa1997b8cd712c425a6ea35ed1a9c86306f286a21989adc08ab1226c2e39b74
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-997529
size: "1007"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests:
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "70520324"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "18834488"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-997529
size: "10823156"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "24581402"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "34683820"
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "33420443"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-997529 image ls --format yaml --alsologtostderr:
I0115 11:46:42.012257  219240 out.go:296] Setting OutFile to fd 1 ...
I0115 11:46:42.012402  219240 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:46:42.012420  219240 out.go:309] Setting ErrFile to fd 2...
I0115 11:46:42.012428  219240 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:46:42.012677  219240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-203994/.minikube/bin
I0115 11:46:42.013382  219240 config.go:182] Loaded profile config "functional-997529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:46:42.013524  219240 config.go:182] Loaded profile config "functional-997529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:46:42.014011  219240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 11:46:42.014077  219240 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 11:46:42.028522  219240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34735
I0115 11:46:42.029096  219240 main.go:141] libmachine: () Calling .GetVersion
I0115 11:46:42.029830  219240 main.go:141] libmachine: Using API Version  1
I0115 11:46:42.029863  219240 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 11:46:42.030515  219240 main.go:141] libmachine: () Calling .GetMachineName
I0115 11:46:42.030726  219240 main.go:141] libmachine: (functional-997529) Calling .GetState
I0115 11:46:42.032875  219240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 11:46:42.032930  219240 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 11:46:42.046999  219240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38345
I0115 11:46:42.047457  219240 main.go:141] libmachine: () Calling .GetVersion
I0115 11:46:42.048061  219240 main.go:141] libmachine: Using API Version  1
I0115 11:46:42.048105  219240 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 11:46:42.048469  219240 main.go:141] libmachine: () Calling .GetMachineName
I0115 11:46:42.048659  219240 main.go:141] libmachine: (functional-997529) Calling .DriverName
I0115 11:46:42.048898  219240 ssh_runner.go:195] Run: systemctl --version
I0115 11:46:42.048928  219240 main.go:141] libmachine: (functional-997529) Calling .GetSSHHostname
I0115 11:46:42.052409  219240 main.go:141] libmachine: (functional-997529) DBG | domain functional-997529 has defined MAC address 52:54:00:f2:eb:19 in network mk-functional-997529
I0115 11:46:42.052901  219240 main.go:141] libmachine: (functional-997529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:eb:19", ip: ""} in network mk-functional-997529: {Iface:virbr1 ExpiryTime:2024-01-15 12:44:10 +0000 UTC Type:0 Mac:52:54:00:f2:eb:19 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:functional-997529 Clientid:01:52:54:00:f2:eb:19}
I0115 11:46:42.052930  219240 main.go:141] libmachine: (functional-997529) DBG | domain functional-997529 has defined IP address 192.168.50.53 and MAC address 52:54:00:f2:eb:19 in network mk-functional-997529
I0115 11:46:42.053016  219240 main.go:141] libmachine: (functional-997529) Calling .GetSSHPort
I0115 11:46:42.053245  219240 main.go:141] libmachine: (functional-997529) Calling .GetSSHKeyPath
I0115 11:46:42.053385  219240 main.go:141] libmachine: (functional-997529) Calling .GetSSHUsername
I0115 11:46:42.053587  219240 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/functional-997529/id_rsa Username:docker}
I0115 11:46:42.143510  219240 ssh_runner.go:195] Run: sudo crictl images --output json
I0115 11:46:42.217570  219240 main.go:141] libmachine: Making call to close driver server
I0115 11:46:42.217590  219240 main.go:141] libmachine: (functional-997529) Calling .Close
I0115 11:46:42.217892  219240 main.go:141] libmachine: Successfully made call to close driver server
I0115 11:46:42.217915  219240 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 11:46:42.217924  219240 main.go:141] libmachine: Making call to close driver server
I0115 11:46:42.217935  219240 main.go:141] libmachine: (functional-997529) Calling .Close
I0115 11:46:42.218196  219240 main.go:141] libmachine: Successfully made call to close driver server
I0115 11:46:42.218216  219240 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 11:46:42.218271  219240 main.go:141] libmachine: (functional-997529) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
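The YAML listing above is produced by `minikube image ls --format yaml`, which is backed by `sudo crictl images --output json` inside the VM (see the ssh_runner line in the log). As a quick sanity check on such a capture, the quoted per-image `size` fields can be totalled with a small awk filter. This is an illustrative sketch, not part of the test: the two-entry sample below stands in for the full listing, and `/tmp/images.yaml` is a hypothetical path.

```shell
# Two-entry sample standing in for the full `image ls --format yaml` output above.
cat > /tmp/images.yaml <<'EOF'
size: "9058936"
size: "16190758"
EOF
# Split each line on double quotes, sum the byte counts, report MiB.
awk -F'"' '/^size:/ { total += $2 } END { printf "%.1f MiB\n", total / 1048576 }' /tmp/images.yaml
# → 24.1 MiB
```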

TestFunctional/parallel/ImageCommands/ImageBuild (4.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-997529 ssh pgrep buildkitd: exit status 1 (215.418694ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image build -t localhost/my-image:functional-997529 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-997529 image build -t localhost/my-image:functional-997529 testdata/build --alsologtostderr: (3.633724252s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-997529 image build -t localhost/my-image:functional-997529 testdata/build --alsologtostderr:
I0115 11:46:42.757916  219363 out.go:296] Setting OutFile to fd 1 ...
I0115 11:46:42.758027  219363 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:46:42.758036  219363 out.go:309] Setting ErrFile to fd 2...
I0115 11:46:42.758041  219363 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:46:42.758232  219363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-203994/.minikube/bin
I0115 11:46:42.758822  219363 config.go:182] Loaded profile config "functional-997529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:46:42.759420  219363 config.go:182] Loaded profile config "functional-997529": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 11:46:42.759924  219363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 11:46:42.760021  219363 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 11:46:42.774539  219363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41579
I0115 11:46:42.775109  219363 main.go:141] libmachine: () Calling .GetVersion
I0115 11:46:42.775666  219363 main.go:141] libmachine: Using API Version  1
I0115 11:46:42.775693  219363 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 11:46:42.776078  219363 main.go:141] libmachine: () Calling .GetMachineName
I0115 11:46:42.776274  219363 main.go:141] libmachine: (functional-997529) Calling .GetState
I0115 11:46:42.778150  219363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0115 11:46:42.778196  219363 main.go:141] libmachine: Launching plugin server for driver kvm2
I0115 11:46:42.793388  219363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
I0115 11:46:42.793862  219363 main.go:141] libmachine: () Calling .GetVersion
I0115 11:46:42.794345  219363 main.go:141] libmachine: Using API Version  1
I0115 11:46:42.794371  219363 main.go:141] libmachine: () Calling .SetConfigRaw
I0115 11:46:42.794795  219363 main.go:141] libmachine: () Calling .GetMachineName
I0115 11:46:42.795024  219363 main.go:141] libmachine: (functional-997529) Calling .DriverName
I0115 11:46:42.795292  219363 ssh_runner.go:195] Run: systemctl --version
I0115 11:46:42.795328  219363 main.go:141] libmachine: (functional-997529) Calling .GetSSHHostname
I0115 11:46:42.798421  219363 main.go:141] libmachine: (functional-997529) DBG | domain functional-997529 has defined MAC address 52:54:00:f2:eb:19 in network mk-functional-997529
I0115 11:46:42.798871  219363 main.go:141] libmachine: (functional-997529) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:eb:19", ip: ""} in network mk-functional-997529: {Iface:virbr1 ExpiryTime:2024-01-15 12:44:10 +0000 UTC Type:0 Mac:52:54:00:f2:eb:19 Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:functional-997529 Clientid:01:52:54:00:f2:eb:19}
I0115 11:46:42.798905  219363 main.go:141] libmachine: (functional-997529) DBG | domain functional-997529 has defined IP address 192.168.50.53 and MAC address 52:54:00:f2:eb:19 in network mk-functional-997529
I0115 11:46:42.799073  219363 main.go:141] libmachine: (functional-997529) Calling .GetSSHPort
I0115 11:46:42.799258  219363 main.go:141] libmachine: (functional-997529) Calling .GetSSHKeyPath
I0115 11:46:42.799426  219363 main.go:141] libmachine: (functional-997529) Calling .GetSSHUsername
I0115 11:46:42.799596  219363 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/functional-997529/id_rsa Username:docker}
I0115 11:46:42.883692  219363 build_images.go:151] Building image from path: /tmp/build.4114981333.tar
I0115 11:46:42.883756  219363 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0115 11:46:42.899353  219363 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4114981333.tar
I0115 11:46:42.907917  219363 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4114981333.tar: stat -c "%s %y" /var/lib/minikube/build/build.4114981333.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4114981333.tar': No such file or directory
I0115 11:46:42.907950  219363 ssh_runner.go:362] scp /tmp/build.4114981333.tar --> /var/lib/minikube/build/build.4114981333.tar (3072 bytes)
I0115 11:46:42.941298  219363 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4114981333
I0115 11:46:42.953928  219363 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4114981333 -xf /var/lib/minikube/build/build.4114981333.tar
I0115 11:46:42.967067  219363 containerd.go:379] Building image: /var/lib/minikube/build/build.4114981333
I0115 11:46:42.967134  219363 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4114981333 --local dockerfile=/var/lib/minikube/build/build.4114981333 --output type=image,name=localhost/my-image:functional-997529
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context:
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 1.0s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:d7d3f48e314be526d930668c7f5c407667c6205c58f2380536a5ecc239b4a588 0.0s done
#8 exporting config sha256:82bbdb04da8be971b07e0f9c3f41dc5527d9a208d11791bc5147ec226e832b30 0.0s done
#8 naming to localhost/my-image:functional-997529
#8 naming to localhost/my-image:functional-997529 done
#8 DONE 0.3s
I0115 11:46:46.289774  219363 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4114981333 --local dockerfile=/var/lib/minikube/build/build.4114981333 --output type=image,name=localhost/my-image:functional-997529: (3.322605032s)
I0115 11:46:46.289854  219363 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4114981333
I0115 11:46:46.310568  219363 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4114981333.tar
I0115 11:46:46.323181  219363 build_images.go:207] Built localhost/my-image:functional-997529 from /tmp/build.4114981333.tar
I0115 11:46:46.323219  219363 build_images.go:123] succeeded building to: functional-997529
I0115 11:46:46.323226  219363 build_images.go:124] failed building to: 
I0115 11:46:46.323258  219363 main.go:141] libmachine: Making call to close driver server
I0115 11:46:46.323273  219363 main.go:141] libmachine: (functional-997529) Calling .Close
I0115 11:46:46.323610  219363 main.go:141] libmachine: Successfully made call to close driver server
I0115 11:46:46.323651  219363 main.go:141] libmachine: Making call to close connection to plugin binary
I0115 11:46:46.323662  219363 main.go:141] libmachine: Making call to close driver server
I0115 11:46:46.323673  219363 main.go:141] libmachine: (functional-997529) Calling .Close
I0115 11:46:46.323939  219363 main.go:141] libmachine: (functional-997529) DBG | Closing plugin on server side
I0115 11:46:46.323960  219363 main.go:141] libmachine: Successfully made call to close driver server
I0115 11:46:46.323986  219363 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.07s)
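The BuildKit steps logged above (load a 97-byte Dockerfile, `FROM gcr.io/k8s-minikube/busybox:latest`, `RUN true`, `ADD content.txt /`) imply a build context along the following lines. This is a reconstruction inferred from the step log, not the actual `testdata/build` fixture, and the `/tmp/mk-build` path is hypothetical.

```shell
# Hypothetical recreation of the build context; the real fixture lives in
# minikube's testdata/build and may differ in detail.
mkdir -p /tmp/mk-build
printf 'test\n' > /tmp/mk-build/content.txt
cat > /tmp/mk-build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# Inside the VM, minikube then drives BuildKit directly, as logged above:
#   sudo buildctl build --frontend dockerfile.v0 \
#     --local context=/var/lib/minikube/build/build.4114981333 \
#     --local dockerfile=/var/lib/minikube/build/build.4114981333 \
#     --output type=image,name=localhost/my-image:functional-997529
```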

TestFunctional/parallel/ImageCommands/Setup (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.20980071s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-997529
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image load --daemon gcr.io/google-containers/addon-resizer:functional-997529 --alsologtostderr
2024/01/15 11:46:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-997529 image load --daemon gcr.io/google-containers/addon-resizer:functional-997529 --alsologtostderr: (4.823328667s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.09s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image load --daemon gcr.io/google-containers/addon-resizer:functional-997529 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-997529 image load --daemon gcr.io/google-containers/addon-resizer:functional-997529 --alsologtostderr: (3.16685788s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.43s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.133520799s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-997529
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image load --daemon gcr.io/google-containers/addon-resizer:functional-997529 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-997529 image load --daemon gcr.io/google-containers/addon-resizer:functional-997529 --alsologtostderr: (5.016594495s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.47s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image save gcr.io/google-containers/addon-resizer:functional-997529 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-997529 image save gcr.io/google-containers/addon-resizer:functional-997529 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.536806286s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.54s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image rm gcr.io/google-containers/addon-resizer:functional-997529 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-997529 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.648811614s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.90s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-997529
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-997529 image save --daemon gcr.io/google-containers/addon-resizer:functional-997529 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-997529 image save --daemon gcr.io/google-containers/addon-resizer:functional-997529 --alsologtostderr: (1.491878759s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-997529
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.53s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-997529
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-997529
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-997529
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (105.96s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-868201 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0115 11:47:49.478692  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-868201 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m45.964626806s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (105.96s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-868201 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-868201 addons enable ingress --alsologtostderr -v=5: (11.526825604s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.53s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-868201 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (30.64s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-868201 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-868201 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.473745806s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-868201 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-868201 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f9d4714e-00ef-460c-b2ce-84eed785f14a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f9d4714e-00ef-460c-b2ce-84eed785f14a] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.004448515s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-868201 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-868201 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-868201 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.169
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-868201 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-868201 addons disable ingress-dns --alsologtostderr -v=1: (2.320995035s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-868201 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-868201 addons disable ingress --alsologtostderr -v=1: (7.630950109s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (30.64s)
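Several checks above ("waiting 8m0s for pods matching \"run=nginx\" in namespace \"default\"") follow the same poll-until-healthy pattern. A minimal sketch of that loop, as an illustrative helper rather than minikube's actual implementation:

```shell
# Poll a command once per second until it succeeds or a deadline passes.
# Usage: wait_for <timeout_seconds> <command> [args...]
wait_for() {
  local deadline=$(( $(date +%s) + $1 )); shift
  until "$@"; do
    # Give up once the deadline is reached.
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 1
  done
}
# e.g. wait_for 480 sh -c 'kubectl get pods -l run=nginx | grep -q Running'
```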

TestJSONOutput/start/Command (66.78s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-303962 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0115 11:50:05.633683  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-303962 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m6.774746479s)
--- PASS: TestJSONOutput/start/Command (66.78s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.7s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-303962 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-303962 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.12s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-303962 --output=json --user=testUser
E0115 11:50:33.319379  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-303962 --output=json --user=testUser: (7.115649839s)
--- PASS: TestJSONOutput/stop/Command (7.12s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-321723 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-321723 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.361364ms)

-- stdout --
	{"specversion":"1.0","id":"7876515a-81dc-44af-a05f-0e65cf0ef3f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-321723] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a6e28ed-851b-4486-9b22-45a7c05f289c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17957"}}
	{"specversion":"1.0","id":"f3da8706-1ff2-4c36-b21d-6e6014c3c477","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"99adbe32-c11a-4836-9b16-af3b2a934b06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17957-203994/kubeconfig"}}
	{"specversion":"1.0","id":"78520f0a-61d9-48b5-a7c2-85fbc2989ca9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-203994/.minikube"}}
	{"specversion":"1.0","id":"053b2db2-e6bc-4307-bc1b-a1247d078f89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0f1374c6-9513-49fa-ba0a-0ec1ea1a432e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"33bff74f-19e9-4444-b5b6-4de8503c0062","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-321723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-321723
--- PASS: TestErrorJSONOutput (0.24s)
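The `-- stdout --` block above shows that `--output=json` emits one CloudEvents-style JSON object per line (`specversion`, `id`, `source`, `type`, `data`). A minimal sketch of consuming that stream, using two event lines copied from the output above; the `collect_errors` helper is illustrative, not part of minikube:

```python
import json

# Two event lines as emitted by `minikube start --output=json`
# (copied verbatim from the stdout block above).
stream = """\
{"specversion":"1.0","id":"1a6e28ed-851b-4486-9b22-45a7c05f289c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17957"}}
{"specversion":"1.0","id":"33bff74f-19e9-4444-b5b6-4de8503c0062","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
"""

def collect_errors(lines):
    """Return (exitcode, message) for every *.error event in the stream."""
    errors = []
    for line in lines:
        if not line.strip():
            continue
        event = json.loads(line)
        if event["type"].endswith(".error"):
            errors.append((event["data"]["exitcode"], event["data"]["message"]))
    return errors

print(collect_errors(stream.splitlines()))
# [('56', "The driver 'fail' is not supported on linux/amd64")]
```

This is why the test can assert on exit status 56: the error event carries the same code in its `data.exitcode` field.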

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (100.7s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-721457 --driver=kvm2  --container-runtime=containerd
E0115 11:51:05.040865  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
E0115 11:51:05.046285  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
E0115 11:51:05.056661  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
E0115 11:51:05.077042  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
E0115 11:51:05.117455  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
E0115 11:51:05.197916  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
E0115 11:51:05.358409  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
E0115 11:51:05.679204  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
E0115 11:51:06.320197  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
E0115 11:51:07.600882  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
E0115 11:51:10.161935  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
E0115 11:51:15.283071  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
E0115 11:51:25.523325  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-721457 --driver=kvm2  --container-runtime=containerd: (50.101133563s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-723905 --driver=kvm2  --container-runtime=containerd
E0115 11:51:46.003537  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-723905 --driver=kvm2  --container-runtime=containerd: (47.69314783s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-721457
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-723905
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-723905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-723905
helpers_test.go:175: Cleaning up "first-721457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-721457
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-721457: (1.01445464s)
--- PASS: TestMinikubeProfile (100.70s)
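TestMinikubeProfile verifies both clusters via `profile list -ojson`. The payload itself is not shown in this log; the sketch below assumes the top-level `valid`/`invalid` keys and per-profile `Name` field that recent minikube releases emit — treat the exact schema as an assumption, and the trimmed sample as hypothetical:

```python
import json

# Hypothetical, trimmed `minikube profile list -ojson` payload; the real
# output carries many more per-profile fields (driver, node list, ...).
sample = json.loads("""
{"invalid": [],
 "valid": [{"Name": "first-721457",  "Status": "Running"},
           {"Name": "second-723905", "Status": "Running"}]}
""")

def profile_names(payload):
    """Names of all valid profiles, e.g. to assert both clusters exist."""
    return sorted(p["Name"] for p in payload["valid"])

print(profile_names(sample))
# ['first-721457', 'second-723905']
```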

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.46s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-550876 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0115 11:52:26.964438  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-550876 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.463773894s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.46s)

TestMountStart/serial/VerifyMountFirst (0.42s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-550876 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-550876 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)

TestMountStart/serial/StartWithMountSecond (29.85s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-566904 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-566904 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.850918489s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.85s)

TestMountStart/serial/VerifyMountSecond (0.43s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-566904 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-566904 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.43s)

TestMountStart/serial/DeleteFirst (0.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-550876 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

TestMountStart/serial/VerifyMountPostDelete (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-566904 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-566904 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

TestMountStart/serial/Stop (1.44s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-566904
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-566904: (1.439571128s)
--- PASS: TestMountStart/serial/Stop (1.44s)

TestMountStart/serial/RestartStopped (26.97s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-566904
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-566904: (25.972962713s)
E0115 11:53:48.885290  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (26.97s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-566904 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-566904 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

TestMultiNode/serial/FreshStart2Nodes (112.38s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-427024 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0115 11:53:51.986162  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 11:53:51.991475  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 11:53:52.001733  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 11:53:52.022092  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 11:53:52.062388  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 11:53:52.142806  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 11:53:52.303865  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 11:53:52.624833  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 11:53:53.265575  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 11:53:54.546792  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 11:53:57.107652  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 11:54:02.227943  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 11:54:12.468391  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 11:54:32.948936  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 11:55:05.632849  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
E0115 11:55:13.909296  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-427024 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m51.923356737s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.38s)

TestMultiNode/serial/DeployApp2Nodes (5.01s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-427024 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-427024 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-427024 -- rollout status deployment/busybox: (3.127299661s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-427024 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-427024 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-427024 -- exec busybox-5bc68d56bd-g7tjz -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-427024 -- exec busybox-5bc68d56bd-lvjvd -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-427024 -- exec busybox-5bc68d56bd-g7tjz -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-427024 -- exec busybox-5bc68d56bd-lvjvd -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-427024 -- exec busybox-5bc68d56bd-g7tjz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-427024 -- exec busybox-5bc68d56bd-lvjvd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.01s)

TestMultiNode/serial/PingHostFrom2Pods (0.96s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-427024 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-427024 -- exec busybox-5bc68d56bd-g7tjz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-427024 -- exec busybox-5bc68d56bd-g7tjz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-427024 -- exec busybox-5bc68d56bd-lvjvd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-427024 -- exec busybox-5bc68d56bd-lvjvd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)
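PingHostFrom2Pods recovers the host IP inside each pod with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, i.e. field 3 of output line 5. A minimal sketch of that extraction, assuming BusyBox-style `nslookup` output (the real layout depends on the resolver in the busybox image):

```python
# Assumed BusyBox-style `nslookup host.minikube.internal` output;
# line 5 carries the answer record, single-space separated.
nslookup_output = """\
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal
"""

def host_ip(output):
    """Mimic `awk 'NR==5' | cut -d' ' -f3`: field 3 of line 5."""
    line5 = output.splitlines()[4]   # awk NR==5 -> fifth line
    return line5.split(" ")[2]       # cut -d' ' -f3 -> third field

print(host_ip(nslookup_output))
# 192.168.39.1
```

The extracted address (192.168.39.1 here, matching the ping target in the log) is then handed to `ping -c 1` to confirm pod-to-host reachability.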

                                                
                                    
TestMultiNode/serial/AddNode (45.63s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-427024 -v 3 --alsologtostderr
E0115 11:56:05.039690  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
E0115 11:56:32.725913  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-427024 -v 3 --alsologtostderr: (45.020775336s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.63s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-427024 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.93s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 cp testdata/cp-test.txt multinode-427024:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 cp multinode-427024:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2627771537/001/cp-test_multinode-427024.txt
E0115 11:56:35.830385  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 cp multinode-427024:/home/docker/cp-test.txt multinode-427024-m02:/home/docker/cp-test_multinode-427024_multinode-427024-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024-m02 "sudo cat /home/docker/cp-test_multinode-427024_multinode-427024-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 cp multinode-427024:/home/docker/cp-test.txt multinode-427024-m03:/home/docker/cp-test_multinode-427024_multinode-427024-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024-m03 "sudo cat /home/docker/cp-test_multinode-427024_multinode-427024-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 cp testdata/cp-test.txt multinode-427024-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 cp multinode-427024-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2627771537/001/cp-test_multinode-427024-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 cp multinode-427024-m02:/home/docker/cp-test.txt multinode-427024:/home/docker/cp-test_multinode-427024-m02_multinode-427024.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024 "sudo cat /home/docker/cp-test_multinode-427024-m02_multinode-427024.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 cp multinode-427024-m02:/home/docker/cp-test.txt multinode-427024-m03:/home/docker/cp-test_multinode-427024-m02_multinode-427024-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024-m03 "sudo cat /home/docker/cp-test_multinode-427024-m02_multinode-427024-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 cp testdata/cp-test.txt multinode-427024-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 cp multinode-427024-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2627771537/001/cp-test_multinode-427024-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 cp multinode-427024-m03:/home/docker/cp-test.txt multinode-427024:/home/docker/cp-test_multinode-427024-m03_multinode-427024.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024 "sudo cat /home/docker/cp-test_multinode-427024-m03_multinode-427024.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 cp multinode-427024-m03:/home/docker/cp-test.txt multinode-427024-m02:/home/docker/cp-test_multinode-427024-m03_multinode-427024-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 ssh -n multinode-427024-m02 "sudo cat /home/docker/cp-test_multinode-427024-m03_multinode-427024-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.93s)

TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-427024 node stop m03: (1.350575057s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-427024 status: exit status 7 (467.011446ms)

-- stdout --
	multinode-427024
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-427024-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-427024-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-427024 status --alsologtostderr: exit status 7 (456.987794ms)

-- stdout --
	multinode-427024
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-427024-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-427024-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0115 11:56:44.477079  225751 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:56:44.477213  225751 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:56:44.477223  225751 out.go:309] Setting ErrFile to fd 2...
	I0115 11:56:44.477228  225751 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:56:44.477426  225751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-203994/.minikube/bin
	I0115 11:56:44.477594  225751 out.go:303] Setting JSON to false
	I0115 11:56:44.477630  225751 mustload.go:65] Loading cluster: multinode-427024
	I0115 11:56:44.477762  225751 notify.go:220] Checking for updates...
	I0115 11:56:44.478024  225751 config.go:182] Loaded profile config "multinode-427024": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 11:56:44.478038  225751 status.go:255] checking status of multinode-427024 ...
	I0115 11:56:44.478427  225751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:56:44.478488  225751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:56:44.499185  225751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43001
	I0115 11:56:44.499624  225751 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:56:44.500241  225751 main.go:141] libmachine: Using API Version  1
	I0115 11:56:44.500268  225751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:56:44.500702  225751 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:56:44.500928  225751 main.go:141] libmachine: (multinode-427024) Calling .GetState
	I0115 11:56:44.502698  225751 status.go:330] multinode-427024 host status = "Running" (err=<nil>)
	I0115 11:56:44.502720  225751 host.go:66] Checking if "multinode-427024" exists ...
	I0115 11:56:44.503008  225751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:56:44.503049  225751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:56:44.517902  225751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33305
	I0115 11:56:44.518245  225751 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:56:44.518672  225751 main.go:141] libmachine: Using API Version  1
	I0115 11:56:44.518697  225751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:56:44.519037  225751 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:56:44.519175  225751 main.go:141] libmachine: (multinode-427024) Calling .GetIP
	I0115 11:56:44.521973  225751 main.go:141] libmachine: (multinode-427024) DBG | domain multinode-427024 has defined MAC address 52:54:00:99:61:c4 in network mk-multinode-427024
	I0115 11:56:44.522389  225751 main.go:141] libmachine: (multinode-427024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:61:c4", ip: ""} in network mk-multinode-427024: {Iface:virbr1 ExpiryTime:2024-01-15 12:54:06 +0000 UTC Type:0 Mac:52:54:00:99:61:c4 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-427024 Clientid:01:52:54:00:99:61:c4}
	I0115 11:56:44.522422  225751 main.go:141] libmachine: (multinode-427024) DBG | domain multinode-427024 has defined IP address 192.168.39.123 and MAC address 52:54:00:99:61:c4 in network mk-multinode-427024
	I0115 11:56:44.522535  225751 host.go:66] Checking if "multinode-427024" exists ...
	I0115 11:56:44.522818  225751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:56:44.522851  225751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:56:44.538014  225751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36535
	I0115 11:56:44.538478  225751 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:56:44.538921  225751 main.go:141] libmachine: Using API Version  1
	I0115 11:56:44.538944  225751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:56:44.539288  225751 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:56:44.539480  225751 main.go:141] libmachine: (multinode-427024) Calling .DriverName
	I0115 11:56:44.539698  225751 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 11:56:44.539727  225751 main.go:141] libmachine: (multinode-427024) Calling .GetSSHHostname
	I0115 11:56:44.542178  225751 main.go:141] libmachine: (multinode-427024) DBG | domain multinode-427024 has defined MAC address 52:54:00:99:61:c4 in network mk-multinode-427024
	I0115 11:56:44.542610  225751 main.go:141] libmachine: (multinode-427024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:61:c4", ip: ""} in network mk-multinode-427024: {Iface:virbr1 ExpiryTime:2024-01-15 12:54:06 +0000 UTC Type:0 Mac:52:54:00:99:61:c4 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-427024 Clientid:01:52:54:00:99:61:c4}
	I0115 11:56:44.542641  225751 main.go:141] libmachine: (multinode-427024) DBG | domain multinode-427024 has defined IP address 192.168.39.123 and MAC address 52:54:00:99:61:c4 in network mk-multinode-427024
	I0115 11:56:44.542766  225751 main.go:141] libmachine: (multinode-427024) Calling .GetSSHPort
	I0115 11:56:44.542948  225751 main.go:141] libmachine: (multinode-427024) Calling .GetSSHKeyPath
	I0115 11:56:44.543088  225751 main.go:141] libmachine: (multinode-427024) Calling .GetSSHUsername
	I0115 11:56:44.543231  225751 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/multinode-427024/id_rsa Username:docker}
	I0115 11:56:44.631571  225751 ssh_runner.go:195] Run: systemctl --version
	I0115 11:56:44.638396  225751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:56:44.652969  225751 kubeconfig.go:92] found "multinode-427024" server: "https://192.168.39.123:8443"
	I0115 11:56:44.653002  225751 api_server.go:166] Checking apiserver status ...
	I0115 11:56:44.653036  225751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 11:56:44.665281  225751 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1068/cgroup
	I0115 11:56:44.674877  225751 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pode6673915f4ed9ab5fb51d4d9269c17ad/1ae1ec8478a1c91fbca9623e5e704254d86eb7e586302376d197ca711c5b8b15"
	I0115 11:56:44.674956  225751 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pode6673915f4ed9ab5fb51d4d9269c17ad/1ae1ec8478a1c91fbca9623e5e704254d86eb7e586302376d197ca711c5b8b15/freezer.state
	I0115 11:56:44.684482  225751 api_server.go:204] freezer state: "THAWED"
	I0115 11:56:44.684512  225751 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I0115 11:56:44.689509  225751 api_server.go:279] https://192.168.39.123:8443/healthz returned 200:
	ok
	I0115 11:56:44.689533  225751 status.go:421] multinode-427024 apiserver status = Running (err=<nil>)
	I0115 11:56:44.689546  225751 status.go:257] multinode-427024 status: &{Name:multinode-427024 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 11:56:44.689575  225751 status.go:255] checking status of multinode-427024-m02 ...
	I0115 11:56:44.689909  225751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:56:44.689962  225751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:56:44.704909  225751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43537
	I0115 11:56:44.705360  225751 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:56:44.705851  225751 main.go:141] libmachine: Using API Version  1
	I0115 11:56:44.705875  225751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:56:44.706218  225751 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:56:44.706410  225751 main.go:141] libmachine: (multinode-427024-m02) Calling .GetState
	I0115 11:56:44.708049  225751 status.go:330] multinode-427024-m02 host status = "Running" (err=<nil>)
	I0115 11:56:44.708076  225751 host.go:66] Checking if "multinode-427024-m02" exists ...
	I0115 11:56:44.708349  225751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:56:44.708382  225751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:56:44.723026  225751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
	I0115 11:56:44.723441  225751 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:56:44.723904  225751 main.go:141] libmachine: Using API Version  1
	I0115 11:56:44.723925  225751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:56:44.724212  225751 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:56:44.724409  225751 main.go:141] libmachine: (multinode-427024-m02) Calling .GetIP
	I0115 11:56:44.726997  225751 main.go:141] libmachine: (multinode-427024-m02) DBG | domain multinode-427024-m02 has defined MAC address 52:54:00:76:26:26 in network mk-multinode-427024
	I0115 11:56:44.727384  225751 main.go:141] libmachine: (multinode-427024-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:26:26", ip: ""} in network mk-multinode-427024: {Iface:virbr1 ExpiryTime:2024-01-15 12:55:13 +0000 UTC Type:0 Mac:52:54:00:76:26:26 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-427024-m02 Clientid:01:52:54:00:76:26:26}
	I0115 11:56:44.727415  225751 main.go:141] libmachine: (multinode-427024-m02) DBG | domain multinode-427024-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:76:26:26 in network mk-multinode-427024
	I0115 11:56:44.727571  225751 host.go:66] Checking if "multinode-427024-m02" exists ...
	I0115 11:56:44.727927  225751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:56:44.727969  225751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:56:44.742374  225751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0115 11:56:44.742793  225751 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:56:44.743221  225751 main.go:141] libmachine: Using API Version  1
	I0115 11:56:44.743244  225751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:56:44.743578  225751 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:56:44.743759  225751 main.go:141] libmachine: (multinode-427024-m02) Calling .DriverName
	I0115 11:56:44.743964  225751 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 11:56:44.743984  225751 main.go:141] libmachine: (multinode-427024-m02) Calling .GetSSHHostname
	I0115 11:56:44.746574  225751 main.go:141] libmachine: (multinode-427024-m02) DBG | domain multinode-427024-m02 has defined MAC address 52:54:00:76:26:26 in network mk-multinode-427024
	I0115 11:56:44.746979  225751 main.go:141] libmachine: (multinode-427024-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:26:26", ip: ""} in network mk-multinode-427024: {Iface:virbr1 ExpiryTime:2024-01-15 12:55:13 +0000 UTC Type:0 Mac:52:54:00:76:26:26 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-427024-m02 Clientid:01:52:54:00:76:26:26}
	I0115 11:56:44.747011  225751 main.go:141] libmachine: (multinode-427024-m02) DBG | domain multinode-427024-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:76:26:26 in network mk-multinode-427024
	I0115 11:56:44.747124  225751 main.go:141] libmachine: (multinode-427024-m02) Calling .GetSSHPort
	I0115 11:56:44.747308  225751 main.go:141] libmachine: (multinode-427024-m02) Calling .GetSSHKeyPath
	I0115 11:56:44.747496  225751 main.go:141] libmachine: (multinode-427024-m02) Calling .GetSSHUsername
	I0115 11:56:44.747694  225751 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17957-203994/.minikube/machines/multinode-427024-m02/id_rsa Username:docker}
	I0115 11:56:44.843292  225751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:56:44.857552  225751 status.go:257] multinode-427024-m02 status: &{Name:multinode-427024-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0115 11:56:44.857596  225751 status.go:255] checking status of multinode-427024-m03 ...
	I0115 11:56:44.858045  225751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 11:56:44.858102  225751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 11:56:44.873054  225751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35451
	I0115 11:56:44.873485  225751 main.go:141] libmachine: () Calling .GetVersion
	I0115 11:56:44.873932  225751 main.go:141] libmachine: Using API Version  1
	I0115 11:56:44.873952  225751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 11:56:44.874274  225751 main.go:141] libmachine: () Calling .GetMachineName
	I0115 11:56:44.874484  225751 main.go:141] libmachine: (multinode-427024-m03) Calling .GetState
	I0115 11:56:44.876081  225751 status.go:330] multinode-427024-m03 host status = "Stopped" (err=<nil>)
	I0115 11:56:44.876095  225751 status.go:343] host is not running, skipping remaining checks
	I0115 11:56:44.876100  225751 status.go:257] multinode-427024-m03 status: &{Name:multinode-427024-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
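The StopNode run above shows that `minikube status` still prints per-node state but exits with status 7 once any node is stopped, so callers must branch on the exit code rather than on output alone. A minimal sketch of how a wrapper script might react; the meaning of code 7 here is inferred from this log (a stopped node), not taken from minikube documentation, and the minikube invocation is replaced by a fixed value so the sketch is self-contained:

```shell
# Stand-in for: out/minikube-linux-amd64 -p multinode-427024 status; rc=$?
# (hardcoded so the sketch runs without a cluster)
rc=7

# Map the exit code to a coarse state, as inferred from the log above:
# 0 = every node running, 7 = at least one node stopped.
case "$rc" in
  0) state="all-running" ;;
  7) state="node-stopped" ;;
  *) state="error" ;;
esac
printf 'state=%s\n' "$state"
```

In a real script the hardcoded `rc=7` would be replaced by capturing `$?` from the actual `minikube status` call.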

TestMultiNode/serial/StartAfterStop (27.62s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-427024 node start m03 --alsologtostderr: (26.943237259s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.62s)

TestMultiNode/serial/RestartKeepsNodes (311.46s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-427024
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-427024
E0115 11:58:51.989352  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 11:59:19.671248  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 12:00:05.632684  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-427024: (3m4.67028417s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-427024 --wait=true -v=8 --alsologtostderr
E0115 12:01:05.039742  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
E0115 12:01:28.679783  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-427024 --wait=true -v=8 --alsologtostderr: (2m6.664271044s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-427024
--- PASS: TestMultiNode/serial/RestartKeepsNodes (311.46s)

TestMultiNode/serial/DeleteNode (1.81s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-427024 node delete m03: (1.250809703s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.81s)
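The `go-template` query in the DeleteNode check extracts only each node's Ready condition, printing one "True"/"False" per line. A minimal sketch of consuming that output in shell; no live cluster is assumed, so the kubectl output is simulated with a fixed sample:

```shell
# Stand-in for the output of:
#   kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
# i.e. one Ready status per remaining node, one per line (sample values).
statuses=" True
 True"

# Fold the per-node statuses into a single all-or-nothing flag.
all_ready=true
while read -r s; do
  [ "$s" = "True" ] || all_ready=false
done <<EOF
$statuses
EOF
printf 'all_ready=%s\n' "$all_ready"
```

The heredoc (rather than a pipe) keeps the `while` loop in the current shell, so the `all_ready` assignment survives the loop.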

TestMultiNode/serial/StopMultiNode (183.2s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 stop
E0115 12:03:51.988597  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 12:05:05.633111  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-427024 stop: (3m2.996728032s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-427024 status: exit status 7 (109.779876ms)

-- stdout --
	multinode-427024
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-427024-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-427024 status --alsologtostderr: exit status 7 (94.591296ms)

-- stdout --
	multinode-427024
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-427024-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0115 12:05:28.935131  227868 out.go:296] Setting OutFile to fd 1 ...
	I0115 12:05:28.935381  227868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 12:05:28.935389  227868 out.go:309] Setting ErrFile to fd 2...
	I0115 12:05:28.935394  227868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 12:05:28.935582  227868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-203994/.minikube/bin
	I0115 12:05:28.935772  227868 out.go:303] Setting JSON to false
	I0115 12:05:28.935799  227868 mustload.go:65] Loading cluster: multinode-427024
	I0115 12:05:28.935850  227868 notify.go:220] Checking for updates...
	I0115 12:05:28.936194  227868 config.go:182] Loaded profile config "multinode-427024": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 12:05:28.936208  227868 status.go:255] checking status of multinode-427024 ...
	I0115 12:05:28.936636  227868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 12:05:28.936695  227868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 12:05:28.950966  227868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I0115 12:05:28.951412  227868 main.go:141] libmachine: () Calling .GetVersion
	I0115 12:05:28.952097  227868 main.go:141] libmachine: Using API Version  1
	I0115 12:05:28.952123  227868 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 12:05:28.952442  227868 main.go:141] libmachine: () Calling .GetMachineName
	I0115 12:05:28.952657  227868 main.go:141] libmachine: (multinode-427024) Calling .GetState
	I0115 12:05:28.954442  227868 status.go:330] multinode-427024 host status = "Stopped" (err=<nil>)
	I0115 12:05:28.954456  227868 status.go:343] host is not running, skipping remaining checks
	I0115 12:05:28.954461  227868 status.go:257] multinode-427024 status: &{Name:multinode-427024 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 12:05:28.954485  227868 status.go:255] checking status of multinode-427024-m02 ...
	I0115 12:05:28.954812  227868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0115 12:05:28.954859  227868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0115 12:05:28.969361  227868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44143
	I0115 12:05:28.969752  227868 main.go:141] libmachine: () Calling .GetVersion
	I0115 12:05:28.970152  227868 main.go:141] libmachine: Using API Version  1
	I0115 12:05:28.970175  227868 main.go:141] libmachine: () Calling .SetConfigRaw
	I0115 12:05:28.970434  227868 main.go:141] libmachine: () Calling .GetMachineName
	I0115 12:05:28.970684  227868 main.go:141] libmachine: (multinode-427024-m02) Calling .GetState
	I0115 12:05:28.972231  227868 status.go:330] multinode-427024-m02 host status = "Stopped" (err=<nil>)
	I0115 12:05:28.972246  227868 status.go:343] host is not running, skipping remaining checks
	I0115 12:05:28.972252  227868 status.go:257] multinode-427024-m02 status: &{Name:multinode-427024-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.20s)

TestMultiNode/serial/RestartMultiNode (94.63s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-427024 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0115 12:06:05.040433  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-427024 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m34.030502509s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-427024 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (94.63s)

TestMultiNode/serial/ValidateNameConflict (49.83s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-427024
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-427024-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-427024-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (85.634018ms)

-- stdout --
	* [multinode-427024-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-203994/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-203994/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-427024-m02' is duplicated with machine name 'multinode-427024-m02' in profile 'multinode-427024'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-427024-m03 --driver=kvm2  --container-runtime=containerd
E0115 12:07:28.087703  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-427024-m03 --driver=kvm2  --container-runtime=containerd: (48.6031646s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-427024
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-427024: exit status 80 (251.061229ms)

-- stdout --
	* Adding node m03 to cluster multinode-427024
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-427024-m03 already exists in multinode-427024-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-427024-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.83s)

TestPreload (277.45s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-591815 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0115 12:08:51.987798  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-591815 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m3.703718729s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-591815 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-591815 image pull gcr.io/k8s-minikube/busybox: (1.470489946s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-591815
E0115 12:10:05.633004  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
E0115 12:10:15.033151  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 12:11:05.039513  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-591815: (1m31.506627375s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-591815 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-591815 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (59.665335549s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-591815 image list
helpers_test.go:175: Cleaning up "test-preload-591815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-591815
--- PASS: TestPreload (277.45s)

TestScheduledStopUnix (118.67s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-745558 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-745558 --memory=2048 --driver=kvm2  --container-runtime=containerd: (46.829661018s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-745558 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-745558 -n scheduled-stop-745558
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-745558 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-745558 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-745558 -n scheduled-stop-745558
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-745558
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-745558 --schedule 15s
E0115 12:13:51.988481  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-745558
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-745558: exit status 7 (81.805261ms)

-- stdout --
	scheduled-stop-745558
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-745558 -n scheduled-stop-745558
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-745558 -n scheduled-stop-745558: exit status 7 (76.074412ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-745558" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-745558
--- PASS: TestScheduledStopUnix (118.67s)

TestRunningBinaryUpgrade (213.43s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3947715687 start -p running-upgrade-955192 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0115 12:15:05.632665  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3947715687 start -p running-upgrade-955192 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m17.277866895s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-955192 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-955192 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m14.370197337s)
helpers_test.go:175: Cleaning up "running-upgrade-955192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-955192
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-955192: (1.181054486s)
--- PASS: TestRunningBinaryUpgrade (213.43s)

TestKubernetesUpgrade (133.74s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-102409 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0115 12:26:55.033530  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 12:27:14.707404  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-102409 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m7.035667223s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-102409
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-102409: (2.119486835s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-102409 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-102409 status --format={{.Host}}: exit status 7 (84.307305ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-102409 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0115 12:27:42.391216  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-102409 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (40.017596058s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-102409 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-102409 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-102409 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (102.93516ms)

-- stdout --
	* [kubernetes-upgrade-102409] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-203994/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-203994/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-102409
	    minikube start -p kubernetes-upgrade-102409 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1024092 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-102409 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-102409 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-102409 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (23.276696436s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-102409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-102409
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-102409: (1.044306638s)
--- PASS: TestKubernetesUpgrade (133.74s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-953906 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-953906 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (104.400316ms)

-- stdout --
	* [NoKubernetes-953906] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-203994/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-203994/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestStartStop/group/old-k8s-version/serial/FirstStart (163.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-204355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-204355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m43.058667568s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (163.06s)

TestNoKubernetes/serial/StartWithK8s (102.52s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-953906 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-953906 --driver=kvm2  --container-runtime=containerd: (1m42.231777765s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-953906 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (102.52s)

TestNetworkPlugins/group/false (3.53s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-393084 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-393084 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (118.71344ms)

-- stdout --
	* [false-393084] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-203994/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-203994/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0115 12:15:42.530693  232416 out.go:296] Setting OutFile to fd 1 ...
	I0115 12:15:42.530866  232416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 12:15:42.530877  232416 out.go:309] Setting ErrFile to fd 2...
	I0115 12:15:42.530885  232416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 12:15:42.531116  232416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-203994/.minikube/bin
	I0115 12:15:42.531722  232416 out.go:303] Setting JSON to false
	I0115 12:15:42.532713  232416 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":17895,"bootTime":1705303048,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0115 12:15:42.532814  232416 start.go:138] virtualization: kvm guest
	I0115 12:15:42.535505  232416 out.go:177] * [false-393084] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0115 12:15:42.537181  232416 out.go:177]   - MINIKUBE_LOCATION=17957
	I0115 12:15:42.537189  232416 notify.go:220] Checking for updates...
	I0115 12:15:42.538629  232416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 12:15:42.540125  232416 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17957-203994/kubeconfig
	I0115 12:15:42.541601  232416 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-203994/.minikube
	I0115 12:15:42.543153  232416 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0115 12:15:42.544721  232416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 12:15:42.546635  232416 config.go:182] Loaded profile config "NoKubernetes-953906": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 12:15:42.546746  232416 config.go:182] Loaded profile config "old-k8s-version-204355": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0115 12:15:42.546817  232416 config.go:182] Loaded profile config "running-upgrade-955192": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0115 12:15:42.546895  232416 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 12:15:42.583760  232416 out.go:177] * Using the kvm2 driver based on user configuration
	I0115 12:15:42.585257  232416 start.go:298] selected driver: kvm2
	I0115 12:15:42.585269  232416 start.go:902] validating driver "kvm2" against <nil>
	I0115 12:15:42.585282  232416 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 12:15:42.587570  232416 out.go:177] 
	W0115 12:15:42.588862  232416 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0115 12:15:42.590161  232416 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-393084 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-393084

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-393084

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-393084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-393084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-393084

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-393084

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-393084

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-393084

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-393084

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-393084

>>> host: /etc/nsswitch.conf:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: /etc/hosts:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: /etc/resolv.conf:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-393084

>>> host: crictl pods:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: crictl containers:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> k8s: describe netcat deployment:
error: context "false-393084" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-393084" does not exist

>>> k8s: netcat logs:
error: context "false-393084" does not exist

>>> k8s: describe coredns deployment:
error: context "false-393084" does not exist

>>> k8s: describe coredns pods:
error: context "false-393084" does not exist

>>> k8s: coredns logs:
error: context "false-393084" does not exist

>>> k8s: describe api server pod(s):
error: context "false-393084" does not exist

>>> k8s: api server logs:
error: context "false-393084" does not exist

>>> host: /etc/cni:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: ip a s:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: ip r s:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: iptables-save:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: iptables table nat:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> k8s: describe kube-proxy daemon set:
error: context "false-393084" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-393084" does not exist

>>> k8s: kube-proxy logs:
error: context "false-393084" does not exist

>>> host: kubelet daemon status:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: kubelet daemon config:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> k8s: kubelet logs:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
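Editor's aside: the kubeconfig dump above is effectively empty (`clusters: null`, `contexts: null`), which is the direct cause of every `context "false-393084" does not exist` error in this debug section. A hedged sketch below confirms the context's absence without needing kubectl at all; the file path is hypothetical, and the YAML is copied verbatim from the dump:

```shell
# Hypothetical scratch path; the YAML is copied verbatim from the dump above.
cat > /tmp/empty-kubeconfig.yaml <<'EOF'
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
EOF
# No context named false-393084 is defined anywhere in the file,
# so any `kubectl --context false-393084 ...` call must fail.
check=$(grep -q 'false-393084' /tmp/empty-kubeconfig.yaml || echo "context not defined")
echo "$check"
```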

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-393084

>>> host: docker daemon status:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: docker daemon config:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: /etc/docker/daemon.json:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: docker system info:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: cri-docker daemon status:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: cri-docker daemon config:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: cri-dockerd version:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: containerd daemon status:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: containerd daemon config:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: /etc/containerd/config.toml:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: containerd config dump:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: crio daemon status:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: crio daemon config:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: /etc/crio:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

>>> host: crio config:
* Profile "false-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393084"

----------------------- debugLogs end: false-393084 [took: 3.248110333s] --------------------------------
helpers_test.go:175: Cleaning up "false-393084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-393084
--- PASS: TestNetworkPlugins/group/false (3.53s)

TestNoKubernetes/serial/StartWithStopK8s (48.35s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-953906 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-953906 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (46.884192487s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-953906 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-953906 status -o json: exit status 2 (300.862141ms)

-- stdout --
	{"Name":"NoKubernetes-953906","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-953906
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-953906: (1.164769164s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (48.35s)
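Editor's aside: the `status -o json` exchange above is straightforward to script against. The sketch below is not part of the test suite; the JSON is copied from the stdout block, and the sed-based field extraction is an illustrative assumption. Note that `minikube status` deliberately exits non-zero (here, 2) when any component is stopped, so callers should not treat a non-zero exit as a hard failure.

```shell
# JSON payload copied verbatim from the -- stdout -- block above.
status='{"Name":"NoKubernetes-953906","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
# Extract single string fields with sed (a jq-free illustrative approach).
host=$(printf '%s' "$status" | sed -n 's/.*"Host":"\([^"]*\)".*/\1/p')
kubelet=$(printf '%s' "$status" | sed -n 's/.*"Kubelet":"\([^"]*\)".*/\1/p')
echo "Host=$host Kubelet=$kubelet"
```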

TestNoKubernetes/serial/Start (28.08s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-953906 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-953906 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.074975361s)
--- PASS: TestNoKubernetes/serial/Start (28.08s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-204355 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2ec7038e-3d51-47e6-98f1-fb3b401dd197] Pending
helpers_test.go:344: "busybox" [2ec7038e-3d51-47e6-98f1-fb3b401dd197] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2ec7038e-3d51-47e6-98f1-fb3b401dd197] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.00333488s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-204355 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-204355 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-204355 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/old-k8s-version/serial/Stop (91.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-204355 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-204355 --alsologtostderr -v=3: (1m31.916556652s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.92s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-953906 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-953906 "sudo systemctl is-active --quiet service kubelet": exit status 1 (225.862827ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
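Editor's aside: the assertion above hinges on `systemctl is-active --quiet`, which exits 0 only when the unit is active, so the test expects a non-zero exit to confirm kubelet is stopped. A minimal local sketch of the same check, with `false` standing in for the remote `systemctl` call (an assumption, since no cluster is involved here):

```shell
# Runs the given command and reports whether the "unit" is stopped.
# Mirrors the test's logic: a non-zero exit from `systemctl is-active
# --quiet` is the success condition.
is_k8s_stopped() {
  if "$@"; then
    echo active       # unit is running: the test would fail here
    return 1
  else
    echo not-active   # non-zero exit: kubelet confirmed stopped
    return 0
  fi
}
# `false` stands in for: ssh ... "sudo systemctl is-active --quiet service kubelet"
result=$(is_k8s_stopped false)
echo "$result"
```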

TestNoKubernetes/serial/ProfileList (71.11s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1m7.660340589s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.448581468s)
--- PASS: TestNoKubernetes/serial/ProfileList (71.11s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-953906
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-953906: (1.247785482s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (39.10s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-953906 --driver=kvm2  --container-runtime=containerd
E0115 12:18:51.986691  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-953906 --driver=kvm2  --container-runtime=containerd: (39.102890647s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (39.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-204355 -n old-k8s-version-204355
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-204355 -n old-k8s-version-204355: exit status 7 (109.071509ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-204355 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/old-k8s-version/serial/SecondStart (139.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-204355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-204355 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m19.406828609s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-204355 -n old-k8s-version-204355
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (139.70s)

TestStartStop/group/no-preload/serial/FirstStart (137.95s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-261870 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-261870 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (2m17.945365812s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (137.95s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-953906 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-953906 "sudo systemctl is-active --quiet service kubelet": exit status 1 (228.017677ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestStartStop/group/embed-certs/serial/FirstStart (145.49s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-519630 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0115 12:20:05.633118  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
E0115 12:21:05.039342  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-519630 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (2m25.486002526s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (145.49s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-76cnj" [c2c7c147-9a46-4111-857d-519a2612ff36] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-76cnj" [c2c7c147-9a46-4111-857d-519a2612ff36] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004684096s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-76cnj" [c2c7c147-9a46-4111-857d-519a2612ff36] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004929791s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-204355 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-204355 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-204355 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-204355 -n old-k8s-version-204355
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-204355 -n old-k8s-version-204355: exit status 2 (267.058803ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-204355 -n old-k8s-version-204355
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-204355 -n old-k8s-version-204355: exit status 2 (268.563159ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-204355 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-204355 -n old-k8s-version-204355
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-204355 -n old-k8s-version-204355
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.67s)

TestPause/serial/Start (104.17s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-655629 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-655629 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m44.17134831s)
--- PASS: TestPause/serial/Start (104.17s)

TestStartStop/group/no-preload/serial/DeployApp (8.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-261870 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fddd5473-9881-4db5-8f01-f105469c6c4c] Pending
helpers_test.go:344: "busybox" [fddd5473-9881-4db5-8f01-f105469c6c4c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fddd5473-9881-4db5-8f01-f105469c6c4c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00529227s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-261870 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-261870 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-261870 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.151999472s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-261870 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-519630 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [35a9e78f-4142-47e6-8dfe-88f46d6d1e59] Pending
helpers_test.go:344: "busybox" [35a9e78f-4142-47e6-8dfe-88f46d6d1e59] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [35a9e78f-4142-47e6-8dfe-88f46d6d1e59] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00464918s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-519630 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

TestStartStop/group/no-preload/serial/Stop (91.82s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-261870 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-261870 --alsologtostderr -v=3: (1m31.821512839s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.82s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-519630 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-519630 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.186464812s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-519630 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/embed-certs/serial/Stop (91.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-519630 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-519630 --alsologtostderr -v=3: (1m31.903156424s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.90s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-134051 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0115 12:22:14.707532  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
E0115 12:22:14.712755  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
E0115 12:22:14.722998  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
E0115 12:22:14.743256  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
E0115 12:22:14.783619  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
E0115 12:22:14.864020  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
E0115 12:22:15.025116  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
E0115 12:22:15.345815  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
E0115 12:22:15.986878  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
E0115 12:22:17.267424  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
E0115 12:22:19.827699  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
E0115 12:22:24.948489  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
E0115 12:22:35.189529  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
E0115 12:22:55.669746  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-134051 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m16.222446641s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.22s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-261870 -n no-preload-261870
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-261870 -n no-preload-261870: exit status 7 (86.686639ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-261870 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (312.69s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-261870 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-261870 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m12.380906682s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-261870 -n no-preload-261870
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (312.69s)

TestPause/serial/SecondStartNoReconfiguration (26.45s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-655629 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-655629 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (26.435061349s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.45s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-134051 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c600e13a-ea7d-40c7-9e99-828fb98c62cd] Pending
helpers_test.go:344: "busybox" [c600e13a-ea7d-40c7-9e99-828fb98c62cd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c600e13a-ea7d-40c7-9e99-828fb98c62cd] Running
E0115 12:23:36.630533  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005867305s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-134051 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-519630 -n embed-certs-519630
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-519630 -n embed-certs-519630: exit status 7 (86.791998ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-519630 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (592.95s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-519630 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-519630 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (9m52.58313462s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-519630 -n embed-certs-519630
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (592.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-134051 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-134051 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-134051 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-134051 --alsologtostderr -v=3: (1m31.823843596s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.82s)

TestPause/serial/Pause (0.67s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-655629 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-655629 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-655629 --output=json --layout=cluster: exit status 2 (277.520861ms)

-- stdout --
	{"Name":"pause-655629","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-655629","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)

TestPause/serial/Unpause (0.64s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-655629 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

TestPause/serial/PauseAgain (0.76s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-655629 --alsologtostderr -v=5
E0115 12:23:51.986087  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
--- PASS: TestPause/serial/PauseAgain (0.76s)

TestPause/serial/DeletePaused (1.06s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-655629 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-655629 --alsologtostderr -v=5: (1.062220038s)
--- PASS: TestPause/serial/DeletePaused (1.06s)

TestPause/serial/VerifyDeletedResources (34.91s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0115 12:24:08.088351  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (34.907454715s)
--- PASS: TestPause/serial/VerifyDeletedResources (34.91s)

TestStartStop/group/newest-cni/serial/FirstStart (61.53s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-272765 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0115 12:24:58.550951  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
E0115 12:25:05.632851  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-272765 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m1.529299231s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.53s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-134051 -n default-k8s-diff-port-134051
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-134051 -n default-k8s-diff-port-134051: exit status 7 (96.450782ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-134051 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (335.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-134051 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-134051 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m35.172211259s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-134051 -n default-k8s-diff-port-134051
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (335.46s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.38s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-272765 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-272765 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.380070213s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.38s)

TestStartStop/group/newest-cni/serial/Stop (2.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-272765 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-272765 --alsologtostderr -v=3: (2.117294183s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-272765 -n newest-cni-272765
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-272765 -n newest-cni-272765: exit status 7 (92.955898ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-272765 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (48.67s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-272765 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0115 12:26:05.039498  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-272765 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (48.39973298s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-272765 -n newest-cni-272765
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (48.67s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-272765 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.65s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-272765 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-272765 -n newest-cni-272765
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-272765 -n newest-cni-272765: exit status 2 (257.765981ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-272765 -n newest-cni-272765
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-272765 -n newest-cni-272765: exit status 2 (266.946958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-272765 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-272765 -n newest-cni-272765
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-272765 -n newest-cni-272765
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.65s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nwgct" [a8cacb02-66a9-461e-b3f3-bedf7a9ff866] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006180108s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nwgct" [a8cacb02-66a9-461e-b3f3-bedf7a9ff866] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005241604s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-261870 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStoppedBinaryUpgrade/Setup (0.38s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.38s)

TestStoppedBinaryUpgrade/Upgrade (119.75s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.19108875 start -p stopped-upgrade-766986 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.19108875 start -p stopped-upgrade-766986 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (57.502367612s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.19108875 -p stopped-upgrade-766986 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.19108875 -p stopped-upgrade-766986 stop: (1.471955858s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-766986 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0115 12:30:05.632996  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-766986 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m0.77783306s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.75s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-261870 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.03s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-261870 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-261870 -n no-preload-261870
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-261870 -n no-preload-261870: exit status 2 (304.043815ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-261870 -n no-preload-261870
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-261870 -n no-preload-261870: exit status 2 (299.808213ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-261870 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-261870 -n no-preload-261870
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-261870 -n no-preload-261870
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.03s)

TestNetworkPlugins/group/auto/Start (89.13s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-393084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
E0115 12:28:51.985930  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-393084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m29.131172864s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.13s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-393084 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-393084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2vw9b" [5d60e720-a730-4f53-9476-73c93c9920cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2vw9b" [5d60e720-a730-4f53-9476-73c93c9920cf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004671426s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.26s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-393084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-393084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-393084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-766986
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-766986: (1.011505729s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

TestNetworkPlugins/group/flannel/Start (83.03s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-393084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-393084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m23.032035192s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.03s)

TestNetworkPlugins/group/enable-default-cni/Start (87.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-393084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-393084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m27.323404382s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.32s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8l2nb" [aac95242-6421-4e3b-bac8-aa06cb7c5b20] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005431828s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8l2nb" [aac95242-6421-4e3b-bac8-aa06cb7c5b20] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00429317s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-134051 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-134051 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-134051 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-134051 -n default-k8s-diff-port-134051
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-134051 -n default-k8s-diff-port-134051: exit status 2 (280.11503ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-134051 -n default-k8s-diff-port-134051
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-134051 -n default-k8s-diff-port-134051: exit status 2 (278.758393ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-134051 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-134051 -n default-k8s-diff-port-134051
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-134051 -n default-k8s-diff-port-134051
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.66s)

TestNetworkPlugins/group/bridge/Start (122.61s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-393084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
E0115 12:31:05.040034  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/functional-997529/client.crt: no such file or directory
E0115 12:31:39.661564  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
E0115 12:31:39.666915  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
E0115 12:31:39.677263  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
E0115 12:31:39.697709  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
E0115 12:31:39.738107  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
E0115 12:31:39.818430  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
E0115 12:31:39.978916  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
E0115 12:31:40.300087  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
E0115 12:31:40.941047  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
E0115 12:31:42.222097  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
E0115 12:31:44.782305  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
E0115 12:31:49.903014  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
E0115 12:32:00.143480  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-393084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (2m2.613721976s)
--- PASS: TestNetworkPlugins/group/bridge/Start (122.61s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-85sdk" [6bdb7f4b-a6da-4173-a891-c20e4299b786] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005936126s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-393084 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-393084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rg2g5" [83a8cfb0-c576-4ecc-899d-bf004e32a82c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rg2g5" [83a8cfb0-c576-4ecc-899d-bf004e32a82c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005735166s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-393084 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-393084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hhz8j" [8b244caa-620d-4eb3-a156-8a7693e3506b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0115 12:32:14.707107  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/old-k8s-version-204355/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-hhz8j" [8b244caa-620d-4eb3-a156-8a7693e3506b] Running
E0115 12:32:20.624364  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005125662s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.33s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-393084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-393084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-393084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-393084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-393084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-393084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (97.75s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-393084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-393084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m37.749397643s)
--- PASS: TestNetworkPlugins/group/calico/Start (97.75s)

TestNetworkPlugins/group/kindnet/Start (98.43s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-393084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E0115 12:33:01.584753  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-393084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m38.430213769s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (98.43s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-393084 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

TestNetworkPlugins/group/bridge/NetCatPod (9.35s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-393084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xrrh7" [93ed4b6d-876a-4e0b-aaab-1c43be7eb2ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xrrh7" [93ed4b6d-876a-4e0b-aaab-1c43be7eb2ef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.116071725s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.35s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-393084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-393084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-393084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-g4gj4" [7f5a3744-d3f0-4d95-a209-1b298783d78a] Running
E0115 12:33:30.382380  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/default-k8s-diff-port-134051/client.crt: no such file or directory
E0115 12:33:30.387698  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/default-k8s-diff-port-134051/client.crt: no such file or directory
E0115 12:33:30.398510  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/default-k8s-diff-port-134051/client.crt: no such file or directory
E0115 12:33:30.419397  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/default-k8s-diff-port-134051/client.crt: no such file or directory
E0115 12:33:30.459738  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/default-k8s-diff-port-134051/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004831762s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-g4gj4" [7f5a3744-d3f0-4d95-a209-1b298783d78a] Running
E0115 12:33:30.540814  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/default-k8s-diff-port-134051/client.crt: no such file or directory
E0115 12:33:30.701617  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/default-k8s-diff-port-134051/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004575787s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-519630 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0115 12:33:35.508725  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/default-k8s-diff-port-134051/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/custom-flannel/Start (90.79s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-393084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0115 12:33:32.948271  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/default-k8s-diff-port-134051/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-393084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m30.790203746s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (90.79s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-519630 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.46s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-519630 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-519630 -n embed-certs-519630
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-519630 -n embed-certs-519630: exit status 2 (372.743371ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-519630 -n embed-certs-519630
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-519630 -n embed-certs-519630: exit status 2 (346.339129ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-519630 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-519630 -n embed-certs-519630
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-519630 -n embed-certs-519630
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.46s)
E0115 12:33:50.869825  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/default-k8s-diff-port-134051/client.crt: no such file or directory
E0115 12:33:51.986457  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/ingress-addon-legacy-868201/client.crt: no such file or directory
E0115 12:34:11.350379  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/default-k8s-diff-port-134051/client.crt: no such file or directory

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7xgdk" [6274edf7-27ae-4a1c-a704-9743a585967d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006118152s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6j67j" [13197592-4a0e-4b78-b8f2-ce54e1c3edbd] Running
E0115 12:34:23.504980  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/no-preload-261870/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005280776s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-393084 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-393084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q4wlk" [1f7d4b28-ba3e-4136-8a9a-37a338e8e944] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q4wlk" [1f7d4b28-ba3e-4136-8a9a-37a338e8e944] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00410181s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-393084 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-393084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7h6sh" [9317d3d4-4022-44ac-aaa8-f74084fd3726] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7h6sh" [9317d3d4-4022-44ac-aaa8-f74084fd3726] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005060821s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

TestNetworkPlugins/group/calico/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-393084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-393084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-393084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/kindnet/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-393084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-393084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-393084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-393084 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-393084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q9xlh" [2f129e8f-b760-4a63-907e-b9662738bd3e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0115 12:35:05.633684  211370 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-203994/.minikube/profiles/addons-431563/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-q9xlh" [2f129e8f-b760-4a63-907e-b9662738bd3e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004794756s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-393084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-393084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-393084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)


Test skip (39/318)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
139 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
250 TestStartStop/group/disable-driver-mounts 0.17
257 TestNetworkPlugins/group/kubenet 3.64
265 TestNetworkPlugins/group/cilium 3.9

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-835455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-835455
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.64s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-393084 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-393084

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-393084

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-393084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-393084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-393084

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-393084

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-393084

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-393084

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-393084

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-393084

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: /etc/hosts:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: /etc/resolv.conf:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-393084

>>> host: crictl pods:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: crictl containers:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> k8s: describe netcat deployment:
error: context "kubenet-393084" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-393084" does not exist

>>> k8s: netcat logs:
error: context "kubenet-393084" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-393084" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-393084" does not exist

>>> k8s: coredns logs:
error: context "kubenet-393084" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-393084" does not exist

>>> k8s: api server logs:
error: context "kubenet-393084" does not exist

>>> host: /etc/cni:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: ip a s:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: ip r s:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: iptables-save:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: iptables table nat:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-393084" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-393084" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-393084" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: kubelet daemon config:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> k8s: kubelet logs:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-393084

>>> host: docker daemon status:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: docker daemon config:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: docker system info:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: cri-docker daemon status:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: cri-docker daemon config:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: cri-dockerd version:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: containerd daemon status:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: containerd daemon config:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: containerd config dump:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: crio daemon status:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: crio daemon config:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: /etc/crio:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

>>> host: crio config:
* Profile "kubenet-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393084"

----------------------- debugLogs end: kubenet-393084 [took: 3.479677991s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-393084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-393084
--- SKIP: TestNetworkPlugins/group/kubenet (3.64s)

TestNetworkPlugins/group/cilium (3.9s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-393084 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-393084

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-393084

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-393084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-393084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-393084

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-393084

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-393084

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-393084

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-393084

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-393084

>>> host: /etc/nsswitch.conf:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

>>> host: /etc/hosts:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

>>> host: /etc/resolv.conf:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-393084

>>> host: crictl pods:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

>>> host: crictl containers:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

>>> k8s: describe netcat deployment:
error: context "cilium-393084" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-393084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-393084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-393084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-393084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-393084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-393084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-393084" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-393084

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-393084

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-393084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-393084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-393084

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-393084

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-393084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-393084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-393084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-393084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-393084" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-393084

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-393084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393084"

                                                
                                                
----------------------- debugLogs end: cilium-393084 [took: 3.71808641s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-393084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-393084
--- SKIP: TestNetworkPlugins/group/cilium (3.90s)

                                                
                                    
Copied to clipboard