Test Report: KVM_Linux_containerd 18644

382efc9ec0890000466ab6258d7a89af3764444c:2024-04-15:34035
Test fail (1/333)

|-------|-------------------------|----------|
| Order | Failed test             | Duration |
|-------|-------------------------|----------|
| 44    | TestAddons/parallel/CSI |   54.63s |
|-------|-------------------------|----------|
TestAddons/parallel/CSI (54.63s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 22.929738ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-316289 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc -o jsonpath={.status.phase} -n default
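The repeated helpers_test.go:394 invocations above are the harness polling the PVC phase until it reports Bound. A minimal shell sketch of that wait (the function name and 2-second poll interval are illustrative, not the harness's actual values; the test's parameters are context addons-316289, PVC hpvc, namespace default, and a 6m0s timeout):

```shell
# Poll a PVC's .status.phase with kubectl until it is Bound or a
# deadline (in seconds) passes. Hypothetical helper, for illustration.
wait_for_pvc_bound() {
  ctx=$1 pvc=$2 ns=$3 deadline=$4
  elapsed=0
  while [ "$elapsed" -lt "$deadline" ]; do
    phase=$(kubectl --context "$ctx" get pvc "$pvc" \
      -o 'jsonpath={.status.phase}' -n "$ns" 2>/dev/null)
    [ "$phase" = "Bound" ] && return 0
    sleep 2
    elapsed=$((elapsed + 2))
  done
  return 1   # timed out without reaching Bound
}
```

Each log line above corresponds to one iteration of such a loop; the PVC here took roughly 18 polls to bind.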
addons_test.go:574: (dbg) Run:  kubectl --context addons-316289 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [dc4307be-0a3d-42f5-b321-7bc83b7cdf60] Pending
helpers_test.go:344: "task-pv-pod" [dc4307be-0a3d-42f5-b321-7bc83b7cdf60] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [dc4307be-0a3d-42f5-b321-7bc83b7cdf60] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004398904s
addons_test.go:584: (dbg) Run:  kubectl --context addons-316289 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-316289 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-316289 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-316289 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-316289 delete pod task-pv-pod: (1.314671476s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-316289 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-316289 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-316289 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [62d9add5-c60a-4339-bd52-28f421976fdf] Pending
helpers_test.go:344: "task-pv-pod-restore" [62d9add5-c60a-4339-bd52-28f421976fdf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [62d9add5-c60a-4339-bd52-28f421976fdf] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004265005s
addons_test.go:626: (dbg) Run:  kubectl --context addons-316289 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-316289 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-316289 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-316289 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-316289 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (296.238153ms)

-- stdout --

-- /stdout --
** stderr ** 
	I0415 11:19:25.002029  365391 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:19:25.002185  365391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:19:25.002200  365391 out.go:304] Setting ErrFile to fd 2...
	I0415 11:19:25.002207  365391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:19:25.002510  365391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
	I0415 11:19:25.002899  365391 mustload.go:65] Loading cluster: addons-316289
	I0415 11:19:25.003439  365391 config.go:182] Loaded profile config "addons-316289": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 11:19:25.003476  365391 addons.go:597] checking whether the cluster is paused
	I0415 11:19:25.003665  365391 config.go:182] Loaded profile config "addons-316289": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 11:19:25.003689  365391 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:19:25.004099  365391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:19:25.004153  365391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:19:25.019958  365391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45459
	I0415 11:19:25.020497  365391 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:19:25.021082  365391 main.go:141] libmachine: Using API Version  1
	I0415 11:19:25.021109  365391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:19:25.021517  365391 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:19:25.021758  365391 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:19:25.023794  365391 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:19:25.024031  365391 ssh_runner.go:195] Run: systemctl --version
	I0415 11:19:25.024060  365391 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:19:25.026422  365391 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:19:25.026847  365391 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:19:25.026878  365391 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:19:25.027015  365391 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:19:25.027229  365391 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:19:25.027405  365391 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:19:25.027576  365391 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:19:25.117399  365391 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0415 11:19:25.117515  365391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0415 11:19:25.179232  365391 cri.go:89] found id: "b3d292c00e4987450876d7962dd01e104a6ae5221b4c8356d63b5a5d19b7d391"
	I0415 11:19:25.179262  365391 cri.go:89] found id: "048d53ff711944bb558d9887eb3706c4d928d74ba4a96ba6b8f8fb6c45f8e0f1"
	I0415 11:19:25.179268  365391 cri.go:89] found id: "22ebf5d2a3726dd06fafaa7c3a2649590cc3e65caa73130e328df976b2784a11"
	I0415 11:19:25.179273  365391 cri.go:89] found id: "d83c73de75ac6aa566ca86dda8862b729cd62840eca87a92a1c6b1cdaee199c1"
	I0415 11:19:25.179277  365391 cri.go:89] found id: "e4b87d98093e550431157a5ff91a0bc01944ffff350af9b8fd41513c4b296769"
	I0415 11:19:25.179285  365391 cri.go:89] found id: "67f3c1b2d8d31a81ae51f22690af4c6ab445acdcfa3f27dcd32a7f04e10d3f4a"
	I0415 11:19:25.179289  365391 cri.go:89] found id: "1f20af26507c1a5a459ebd72fe6ac9ceefd157f90afaf68cb0fc192523ee75d8"
	I0415 11:19:25.179294  365391 cri.go:89] found id: "f2a84dfde4e231c710d1ebc87f5e999dc426d97a351bd8ab37d42ac8cedb9239"
	I0415 11:19:25.179298  365391 cri.go:89] found id: "d37a7ef2f317c6c29b7fe7ca638f952abd6771d6cd824fbf63711f740f28f5db"
	I0415 11:19:25.179304  365391 cri.go:89] found id: "14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4"
	I0415 11:19:25.179308  365391 cri.go:89] found id: "b05e5a105691ac33bdf0dc66e1b2d03989fb1c3e8ebec73a39dedc0c29bf8503"
	I0415 11:19:25.179313  365391 cri.go:89] found id: "cdb092eabe4799c691409c0c6dffd87d67193433c3d4a45ac24f73c9b290e71e"
	I0415 11:19:25.179317  365391 cri.go:89] found id: "03e840b0b514d80222cb65d0163daeea33aeda87aeb65322b9eedcd42c9828fd"
	I0415 11:19:25.179321  365391 cri.go:89] found id: "9d298ffd6ee8a13efb60cef702c02b220e5ed04c83c57cf60ab4a948fdc62715"
	I0415 11:19:25.179329  365391 cri.go:89] found id: "07942458e669535ca519d59108d32f73bd83a4f2066a5eefedb681719de02ee7"
	I0415 11:19:25.179336  365391 cri.go:89] found id: "cdafb6ce262007a07aaa23d0e5bee974bb1608e84ec9c3db4de6eca4e1595d6a"
	I0415 11:19:25.179341  365391 cri.go:89] found id: "1a3ceb1d5cb96e462f690d4cbac2b1646228fb6b77b81f086b7b44ded568a769"
	I0415 11:19:25.179348  365391 cri.go:89] found id: "f39cc7bdef68cb7b6b9f1022f37dd71c9db422929c6c3dc5f7b65d8f46c3f4fd"
	I0415 11:19:25.179356  365391 cri.go:89] found id: ""
	I0415 11:19:25.179417  365391 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0415 11:19:25.218408  365391 main.go:141] libmachine: Making call to close driver server
	I0415 11:19:25.218436  365391 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:19:25.218791  365391 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:19:25.218812  365391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:19:25.221771  365391 out.go:177] 
	W0415 11:19:25.223367  365391 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-15T11:19:25Z" level=error msg="stat /run/containerd/runc/k8s.io/2baf6d4233c73b4fb4d4f9c64c1292faf62effbf52bf256e7f29bb69146346a6: no such file or directory"
	
	W0415 11:19:25.223381  365391 out.go:239] * 
	W0415 11:19:25.226109  365391 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 11:19:25.227789  365391 out.go:177] 

** /stderr **
addons_test.go:640: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-316289 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
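The stderr above shows the root cause: the addon disable runs a paused-check that shells out to `sudo runc --root /run/containerd/runc/k8s.io list -f json`, and a container exited between the crictl enumeration and that call, so runc's stat of the now-deleted state directory failed and minikube aborted with MK_ADDON_DISABLE_PAUSED. A hedged sketch of a wrapper that tolerates this transient race (illustrative only, not minikube's implementation; `RUNC_CMD` and the function name are assumptions introduced here for testability):

```shell
# Run the runc listing but treat the "no such file or directory" race
# (container state dir removed mid-listing) as an empty result instead
# of a hard failure. RUNC_CMD is a hypothetical override point.
RUNC_CMD="${RUNC_CMD:-sudo runc --root /run/containerd/runc/k8s.io list -f json}"

list_paused_tolerant() {
  out=$($RUNC_CMD 2>&1)
  status=$?
  if [ "$status" -eq 0 ]; then
    printf '%s\n' "$out"
    return 0
  fi
  case "$out" in
    # Container vanished between enumeration and stat: report none paused.
    *"no such file or directory"*) echo "[]"; return 0 ;;
    # Any other failure is still fatal.
    *) printf '%s\n' "$out" >&2; return "$status" ;;
  esac
}
```

With a check shaped like this, a container exiting during the listing would not fail the `addons disable csi-hostpath-driver` command; whether minikube should retry or ignore the error is a design choice for the actual fix.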
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-316289 addons disable volumesnapshots --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-316289 -n addons-316289
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-316289 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-316289 logs -n 25: (1.383128924s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-068316                                                                     | download-only-068316 | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:16 UTC | 15 Apr 24 11:16 UTC |
	| delete  | -p download-only-974926                                                                     | download-only-974926 | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:16 UTC | 15 Apr 24 11:16 UTC |
	| delete  | -p download-only-052198                                                                     | download-only-052198 | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:16 UTC | 15 Apr 24 11:16 UTC |
	| delete  | -p download-only-068316                                                                     | download-only-068316 | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:16 UTC | 15 Apr 24 11:16 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-720489 | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:16 UTC |                     |
	|         | binary-mirror-720489                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:42033                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-720489                                                                     | binary-mirror-720489 | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:16 UTC | 15 Apr 24 11:16 UTC |
	| addons  | disable dashboard -p                                                                        | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:16 UTC |                     |
	|         | addons-316289                                                                               |                      |         |                |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:16 UTC |                     |
	|         | addons-316289                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-316289 --wait=true                                                                | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:16 UTC | 15 Apr 24 11:18 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:18 UTC | 15 Apr 24 11:18 UTC |
	|         | addons-316289                                                                               |                      |         |                |                     |                     |
	| ssh     | addons-316289 ssh cat                                                                       | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:18 UTC | 15 Apr 24 11:18 UTC |
	|         | /opt/local-path-provisioner/pvc-eeb96ba2-dac8-4abf-bab1-600492ef2421_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-316289 addons disable                                                                | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:18 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-316289 addons disable                                                                | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:18 UTC | 15 Apr 24 11:18 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| ip      | addons-316289 ip                                                                            | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:18 UTC | 15 Apr 24 11:18 UTC |
	| addons  | addons-316289 addons disable                                                                | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:18 UTC | 15 Apr 24 11:18 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-316289 addons                                                                        | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:18 UTC | 15 Apr 24 11:18 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:18 UTC | 15 Apr 24 11:19 UTC |
	|         | addons-316289                                                                               |                      |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:19 UTC | 15 Apr 24 11:19 UTC |
	|         | -p addons-316289                                                                            |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ssh     | addons-316289 ssh curl -s                                                                   | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:19 UTC | 15 Apr 24 11:19 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |                |                     |                     |
	| ip      | addons-316289 ip                                                                            | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:19 UTC | 15 Apr 24 11:19 UTC |
	| addons  | addons-316289 addons disable                                                                | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:19 UTC | 15 Apr 24 11:19 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-316289 addons disable                                                                | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:19 UTC | 15 Apr 24 11:19 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:19 UTC | 15 Apr 24 11:19 UTC |
	|         | -p addons-316289                                                                            |                      |         |                |                     |                     |
	| addons  | addons-316289 addons                                                                        | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:19 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-316289 addons                                                                        | addons-316289        | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:19 UTC | 15 Apr 24 11:19 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 11:16:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 11:16:13.164623  362872 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:16:13.164761  362872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:16:13.164771  362872 out.go:304] Setting ErrFile to fd 2...
	I0415 11:16:13.164776  362872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:16:13.165017  362872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
	I0415 11:16:13.165676  362872 out.go:298] Setting JSON to false
	I0415 11:16:13.166605  362872 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3516,"bootTime":1713176257,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 11:16:13.166674  362872 start.go:139] virtualization: kvm guest
	I0415 11:16:13.168943  362872 out.go:177] * [addons-316289] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 11:16:13.170292  362872 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 11:16:13.171571  362872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 11:16:13.170315  362872 notify.go:220] Checking for updates...
	I0415 11:16:13.174036  362872 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18644-354432/kubeconfig
	I0415 11:16:13.175397  362872 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18644-354432/.minikube
	I0415 11:16:13.176666  362872 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0415 11:16:13.178019  362872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 11:16:13.179528  362872 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 11:16:13.211209  362872 out.go:177] * Using the kvm2 driver based on user configuration
	I0415 11:16:13.212630  362872 start.go:297] selected driver: kvm2
	I0415 11:16:13.212655  362872 start.go:901] validating driver "kvm2" against <nil>
	I0415 11:16:13.212672  362872 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 11:16:13.213782  362872 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 11:16:13.213872  362872 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18644-354432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0415 11:16:13.228978  362872 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0415 11:16:13.229034  362872 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 11:16:13.229261  362872 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 11:16:13.229327  362872 cni.go:84] Creating CNI manager for ""
	I0415 11:16:13.229341  362872 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0415 11:16:13.229348  362872 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 11:16:13.229396  362872 start.go:340] cluster config:
	{Name:addons-316289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-316289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 11:16:13.229495  362872 iso.go:125] acquiring lock: {Name:mk9a0fa1d69df45a672e90a0ca39f76901edf3d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 11:16:13.231669  362872 out.go:177] * Starting "addons-316289" primary control-plane node in "addons-316289" cluster
	I0415 11:16:13.233201  362872 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0415 11:16:13.233247  362872 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18644-354432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	I0415 11:16:13.233259  362872 cache.go:56] Caching tarball of preloaded images
	I0415 11:16:13.233342  362872 preload.go:173] Found /home/jenkins/minikube-integration/18644-354432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 11:16:13.233355  362872 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0415 11:16:13.233675  362872 profile.go:143] Saving config to /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/config.json ...
	I0415 11:16:13.233701  362872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/config.json: {Name:mk6ba8e6051e7eae8ac366867f867b9bca01b01e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:16:13.233846  362872 start.go:360] acquireMachinesLock for addons-316289: {Name:mk6bc76f64fea645a5d6d3c21cc588a5afcffe90 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 11:16:13.233892  362872 start.go:364] duration metric: took 31.441µs to acquireMachinesLock for "addons-316289"
	I0415 11:16:13.233909  362872 start.go:93] Provisioning new machine with config: &{Name:addons-316289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-316289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0415 11:16:13.233968  362872 start.go:125] createHost starting for "" (driver="kvm2")
	I0415 11:16:13.235861  362872 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0415 11:16:13.236073  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:16:13.236117  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:16:13.250568  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32869
	I0415 11:16:13.251175  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:16:13.251908  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:16:13.251956  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:16:13.252300  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:16:13.252507  362872 main.go:141] libmachine: (addons-316289) Calling .GetMachineName
	I0415 11:16:13.252674  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:16:13.252842  362872 start.go:159] libmachine.API.Create for "addons-316289" (driver="kvm2")
	I0415 11:16:13.252870  362872 client.go:168] LocalClient.Create starting
	I0415 11:16:13.252915  362872 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18644-354432/.minikube/certs/ca.pem
	I0415 11:16:13.370174  362872 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18644-354432/.minikube/certs/cert.pem
	I0415 11:16:13.482183  362872 main.go:141] libmachine: Running pre-create checks...
	I0415 11:16:13.482210  362872 main.go:141] libmachine: (addons-316289) Calling .PreCreateCheck
	I0415 11:16:13.482744  362872 main.go:141] libmachine: (addons-316289) Calling .GetConfigRaw
	I0415 11:16:13.483191  362872 main.go:141] libmachine: Creating machine...
	I0415 11:16:13.483209  362872 main.go:141] libmachine: (addons-316289) Calling .Create
	I0415 11:16:13.483348  362872 main.go:141] libmachine: (addons-316289) Creating KVM machine...
	I0415 11:16:13.484590  362872 main.go:141] libmachine: (addons-316289) DBG | found existing default KVM network
	I0415 11:16:13.485458  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:13.485265  362894 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0415 11:16:13.485486  362872 main.go:141] libmachine: (addons-316289) DBG | created network xml: 
	I0415 11:16:13.485522  362872 main.go:141] libmachine: (addons-316289) DBG | <network>
	I0415 11:16:13.485531  362872 main.go:141] libmachine: (addons-316289) DBG |   <name>mk-addons-316289</name>
	I0415 11:16:13.485541  362872 main.go:141] libmachine: (addons-316289) DBG |   <dns enable='no'/>
	I0415 11:16:13.485547  362872 main.go:141] libmachine: (addons-316289) DBG |   
	I0415 11:16:13.485558  362872 main.go:141] libmachine: (addons-316289) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0415 11:16:13.485571  362872 main.go:141] libmachine: (addons-316289) DBG |     <dhcp>
	I0415 11:16:13.485587  362872 main.go:141] libmachine: (addons-316289) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0415 11:16:13.485591  362872 main.go:141] libmachine: (addons-316289) DBG |     </dhcp>
	I0415 11:16:13.485597  362872 main.go:141] libmachine: (addons-316289) DBG |   </ip>
	I0415 11:16:13.485601  362872 main.go:141] libmachine: (addons-316289) DBG |   
	I0415 11:16:13.485606  362872 main.go:141] libmachine: (addons-316289) DBG | </network>
	I0415 11:16:13.485612  362872 main.go:141] libmachine: (addons-316289) DBG | 
	I0415 11:16:13.491235  362872 main.go:141] libmachine: (addons-316289) DBG | trying to create private KVM network mk-addons-316289 192.168.39.0/24...
	I0415 11:16:13.555260  362872 main.go:141] libmachine: (addons-316289) DBG | private KVM network mk-addons-316289 192.168.39.0/24 created
	I0415 11:16:13.555344  362872 main.go:141] libmachine: (addons-316289) Setting up store path in /home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289 ...
	I0415 11:16:13.555384  362872 main.go:141] libmachine: (addons-316289) Building disk image from file:///home/jenkins/minikube-integration/18644-354432/.minikube/cache/iso/amd64/minikube-v1.33.0-1712854267-18621-amd64.iso
	I0415 11:16:13.555403  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:13.555281  362894 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18644-354432/.minikube
	I0415 11:16:13.555430  362872 main.go:141] libmachine: (addons-316289) Downloading /home/jenkins/minikube-integration/18644-354432/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18644-354432/.minikube/cache/iso/amd64/minikube-v1.33.0-1712854267-18621-amd64.iso...
	I0415 11:16:13.829339  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:13.829164  362894 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa...
	I0415 11:16:13.927698  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:13.927499  362894 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/addons-316289.rawdisk...
	I0415 11:16:13.927755  362872 main.go:141] libmachine: (addons-316289) DBG | Writing magic tar header
	I0415 11:16:13.927772  362872 main.go:141] libmachine: (addons-316289) DBG | Writing SSH key tar header
	I0415 11:16:13.927780  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:13.927631  362894 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289 ...
	I0415 11:16:13.927791  362872 main.go:141] libmachine: (addons-316289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289
	I0415 11:16:13.927801  362872 main.go:141] libmachine: (addons-316289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18644-354432/.minikube/machines
	I0415 11:16:13.927823  362872 main.go:141] libmachine: (addons-316289) Setting executable bit set on /home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289 (perms=drwx------)
	I0415 11:16:13.927835  362872 main.go:141] libmachine: (addons-316289) Setting executable bit set on /home/jenkins/minikube-integration/18644-354432/.minikube/machines (perms=drwxr-xr-x)
	I0415 11:16:13.927845  362872 main.go:141] libmachine: (addons-316289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18644-354432/.minikube
	I0415 11:16:13.927860  362872 main.go:141] libmachine: (addons-316289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18644-354432
	I0415 11:16:13.927866  362872 main.go:141] libmachine: (addons-316289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0415 11:16:13.927873  362872 main.go:141] libmachine: (addons-316289) DBG | Checking permissions on dir: /home/jenkins
	I0415 11:16:13.927879  362872 main.go:141] libmachine: (addons-316289) DBG | Checking permissions on dir: /home
	I0415 11:16:13.927892  362872 main.go:141] libmachine: (addons-316289) DBG | Skipping /home - not owner
	I0415 11:16:13.927906  362872 main.go:141] libmachine: (addons-316289) Setting executable bit set on /home/jenkins/minikube-integration/18644-354432/.minikube (perms=drwxr-xr-x)
	I0415 11:16:13.927923  362872 main.go:141] libmachine: (addons-316289) Setting executable bit set on /home/jenkins/minikube-integration/18644-354432 (perms=drwxrwxr-x)
	I0415 11:16:13.927937  362872 main.go:141] libmachine: (addons-316289) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0415 11:16:13.927947  362872 main.go:141] libmachine: (addons-316289) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0415 11:16:13.927956  362872 main.go:141] libmachine: (addons-316289) Creating domain...
	I0415 11:16:13.929227  362872 main.go:141] libmachine: (addons-316289) define libvirt domain using xml: 
	I0415 11:16:13.929272  362872 main.go:141] libmachine: (addons-316289) <domain type='kvm'>
	I0415 11:16:13.929285  362872 main.go:141] libmachine: (addons-316289)   <name>addons-316289</name>
	I0415 11:16:13.929294  362872 main.go:141] libmachine: (addons-316289)   <memory unit='MiB'>4000</memory>
	I0415 11:16:13.929338  362872 main.go:141] libmachine: (addons-316289)   <vcpu>2</vcpu>
	I0415 11:16:13.929365  362872 main.go:141] libmachine: (addons-316289)   <features>
	I0415 11:16:13.929375  362872 main.go:141] libmachine: (addons-316289)     <acpi/>
	I0415 11:16:13.929385  362872 main.go:141] libmachine: (addons-316289)     <apic/>
	I0415 11:16:13.929394  362872 main.go:141] libmachine: (addons-316289)     <pae/>
	I0415 11:16:13.929401  362872 main.go:141] libmachine: (addons-316289)     
	I0415 11:16:13.929410  362872 main.go:141] libmachine: (addons-316289)   </features>
	I0415 11:16:13.929427  362872 main.go:141] libmachine: (addons-316289)   <cpu mode='host-passthrough'>
	I0415 11:16:13.929481  362872 main.go:141] libmachine: (addons-316289)   
	I0415 11:16:13.929514  362872 main.go:141] libmachine: (addons-316289)   </cpu>
	I0415 11:16:13.929521  362872 main.go:141] libmachine: (addons-316289)   <os>
	I0415 11:16:13.929527  362872 main.go:141] libmachine: (addons-316289)     <type>hvm</type>
	I0415 11:16:13.929536  362872 main.go:141] libmachine: (addons-316289)     <boot dev='cdrom'/>
	I0415 11:16:13.929541  362872 main.go:141] libmachine: (addons-316289)     <boot dev='hd'/>
	I0415 11:16:13.929549  362872 main.go:141] libmachine: (addons-316289)     <bootmenu enable='no'/>
	I0415 11:16:13.929553  362872 main.go:141] libmachine: (addons-316289)   </os>
	I0415 11:16:13.929558  362872 main.go:141] libmachine: (addons-316289)   <devices>
	I0415 11:16:13.929565  362872 main.go:141] libmachine: (addons-316289)     <disk type='file' device='cdrom'>
	I0415 11:16:13.929574  362872 main.go:141] libmachine: (addons-316289)       <source file='/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/boot2docker.iso'/>
	I0415 11:16:13.929585  362872 main.go:141] libmachine: (addons-316289)       <target dev='hdc' bus='scsi'/>
	I0415 11:16:13.929611  362872 main.go:141] libmachine: (addons-316289)       <readonly/>
	I0415 11:16:13.929645  362872 main.go:141] libmachine: (addons-316289)     </disk>
	I0415 11:16:13.929655  362872 main.go:141] libmachine: (addons-316289)     <disk type='file' device='disk'>
	I0415 11:16:13.929667  362872 main.go:141] libmachine: (addons-316289)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0415 11:16:13.929680  362872 main.go:141] libmachine: (addons-316289)       <source file='/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/addons-316289.rawdisk'/>
	I0415 11:16:13.929687  362872 main.go:141] libmachine: (addons-316289)       <target dev='hda' bus='virtio'/>
	I0415 11:16:13.929693  362872 main.go:141] libmachine: (addons-316289)     </disk>
	I0415 11:16:13.929702  362872 main.go:141] libmachine: (addons-316289)     <interface type='network'>
	I0415 11:16:13.929708  362872 main.go:141] libmachine: (addons-316289)       <source network='mk-addons-316289'/>
	I0415 11:16:13.929715  362872 main.go:141] libmachine: (addons-316289)       <model type='virtio'/>
	I0415 11:16:13.929729  362872 main.go:141] libmachine: (addons-316289)     </interface>
	I0415 11:16:13.929749  362872 main.go:141] libmachine: (addons-316289)     <interface type='network'>
	I0415 11:16:13.929763  362872 main.go:141] libmachine: (addons-316289)       <source network='default'/>
	I0415 11:16:13.929774  362872 main.go:141] libmachine: (addons-316289)       <model type='virtio'/>
	I0415 11:16:13.929799  362872 main.go:141] libmachine: (addons-316289)     </interface>
	I0415 11:16:13.929809  362872 main.go:141] libmachine: (addons-316289)     <serial type='pty'>
	I0415 11:16:13.929817  362872 main.go:141] libmachine: (addons-316289)       <target port='0'/>
	I0415 11:16:13.929826  362872 main.go:141] libmachine: (addons-316289)     </serial>
	I0415 11:16:13.929839  362872 main.go:141] libmachine: (addons-316289)     <console type='pty'>
	I0415 11:16:13.929847  362872 main.go:141] libmachine: (addons-316289)       <target type='serial' port='0'/>
	I0415 11:16:13.929852  362872 main.go:141] libmachine: (addons-316289)     </console>
	I0415 11:16:13.929856  362872 main.go:141] libmachine: (addons-316289)     <rng model='virtio'>
	I0415 11:16:13.929866  362872 main.go:141] libmachine: (addons-316289)       <backend model='random'>/dev/random</backend>
	I0415 11:16:13.929871  362872 main.go:141] libmachine: (addons-316289)     </rng>
	I0415 11:16:13.929877  362872 main.go:141] libmachine: (addons-316289)     
	I0415 11:16:13.929885  362872 main.go:141] libmachine: (addons-316289)     
	I0415 11:16:13.929891  362872 main.go:141] libmachine: (addons-316289)   </devices>
	I0415 11:16:13.929897  362872 main.go:141] libmachine: (addons-316289) </domain>
	I0415 11:16:13.929905  362872 main.go:141] libmachine: (addons-316289) 
	I0415 11:16:13.936274  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:0a:64:66 in network default
	I0415 11:16:13.936879  362872 main.go:141] libmachine: (addons-316289) Ensuring networks are active...
	I0415 11:16:13.936919  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:13.937651  362872 main.go:141] libmachine: (addons-316289) Ensuring network default is active
	I0415 11:16:13.938004  362872 main.go:141] libmachine: (addons-316289) Ensuring network mk-addons-316289 is active
	I0415 11:16:13.938453  362872 main.go:141] libmachine: (addons-316289) Getting domain xml...
	I0415 11:16:13.939118  362872 main.go:141] libmachine: (addons-316289) Creating domain...
	I0415 11:16:15.339965  362872 main.go:141] libmachine: (addons-316289) Waiting to get IP...
	I0415 11:16:15.340791  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:15.341291  362872 main.go:141] libmachine: (addons-316289) DBG | unable to find current IP address of domain addons-316289 in network mk-addons-316289
	I0415 11:16:15.341346  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:15.341278  362894 retry.go:31] will retry after 277.863214ms: waiting for machine to come up
	I0415 11:16:15.620842  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:15.621277  362872 main.go:141] libmachine: (addons-316289) DBG | unable to find current IP address of domain addons-316289 in network mk-addons-316289
	I0415 11:16:15.621307  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:15.621222  362894 retry.go:31] will retry after 336.171084ms: waiting for machine to come up
	I0415 11:16:15.958662  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:15.959018  362872 main.go:141] libmachine: (addons-316289) DBG | unable to find current IP address of domain addons-316289 in network mk-addons-316289
	I0415 11:16:15.959050  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:15.958979  362894 retry.go:31] will retry after 445.385324ms: waiting for machine to come up
	I0415 11:16:16.405556  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:16.405980  362872 main.go:141] libmachine: (addons-316289) DBG | unable to find current IP address of domain addons-316289 in network mk-addons-316289
	I0415 11:16:16.406009  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:16.405929  362894 retry.go:31] will retry after 554.265805ms: waiting for machine to come up
	I0415 11:16:16.961828  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:16.962233  362872 main.go:141] libmachine: (addons-316289) DBG | unable to find current IP address of domain addons-316289 in network mk-addons-316289
	I0415 11:16:16.962259  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:16.962207  362894 retry.go:31] will retry after 584.325056ms: waiting for machine to come up
	I0415 11:16:17.548323  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:17.548900  362872 main.go:141] libmachine: (addons-316289) DBG | unable to find current IP address of domain addons-316289 in network mk-addons-316289
	I0415 11:16:17.548938  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:17.548856  362894 retry.go:31] will retry after 601.299342ms: waiting for machine to come up
	I0415 11:16:18.151942  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:18.152431  362872 main.go:141] libmachine: (addons-316289) DBG | unable to find current IP address of domain addons-316289 in network mk-addons-316289
	I0415 11:16:18.152502  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:18.152387  362894 retry.go:31] will retry after 991.91495ms: waiting for machine to come up
	I0415 11:16:19.146312  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:19.146705  362872 main.go:141] libmachine: (addons-316289) DBG | unable to find current IP address of domain addons-316289 in network mk-addons-316289
	I0415 11:16:19.146733  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:19.146655  362894 retry.go:31] will retry after 1.153759469s: waiting for machine to come up
	I0415 11:16:20.301793  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:20.302178  362872 main.go:141] libmachine: (addons-316289) DBG | unable to find current IP address of domain addons-316289 in network mk-addons-316289
	I0415 11:16:20.302223  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:20.302118  362894 retry.go:31] will retry after 1.836836019s: waiting for machine to come up
	I0415 11:16:22.141408  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:22.141877  362872 main.go:141] libmachine: (addons-316289) DBG | unable to find current IP address of domain addons-316289 in network mk-addons-316289
	I0415 11:16:22.141909  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:22.141823  362894 retry.go:31] will retry after 1.773804928s: waiting for machine to come up
	I0415 11:16:23.918455  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:23.918995  362872 main.go:141] libmachine: (addons-316289) DBG | unable to find current IP address of domain addons-316289 in network mk-addons-316289
	I0415 11:16:23.919023  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:23.918948  362894 retry.go:31] will retry after 2.714878652s: waiting for machine to come up
	I0415 11:16:26.636820  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:26.637295  362872 main.go:141] libmachine: (addons-316289) DBG | unable to find current IP address of domain addons-316289 in network mk-addons-316289
	I0415 11:16:26.637329  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:26.637259  362894 retry.go:31] will retry after 2.736733182s: waiting for machine to come up
	I0415 11:16:29.375677  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:29.376072  362872 main.go:141] libmachine: (addons-316289) DBG | unable to find current IP address of domain addons-316289 in network mk-addons-316289
	I0415 11:16:29.376126  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:29.376042  362894 retry.go:31] will retry after 2.98694626s: waiting for machine to come up
	I0415 11:16:32.366443  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:32.366994  362872 main.go:141] libmachine: (addons-316289) DBG | unable to find current IP address of domain addons-316289 in network mk-addons-316289
	I0415 11:16:32.367019  362872 main.go:141] libmachine: (addons-316289) DBG | I0415 11:16:32.366939  362894 retry.go:31] will retry after 3.997894163s: waiting for machine to come up
	I0415 11:16:36.368487  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:36.369099  362872 main.go:141] libmachine: (addons-316289) Found IP for machine: 192.168.39.62
	I0415 11:16:36.369137  362872 main.go:141] libmachine: (addons-316289) Reserving static IP address...
	I0415 11:16:36.369152  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has current primary IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:36.369606  362872 main.go:141] libmachine: (addons-316289) DBG | unable to find host DHCP lease matching {name: "addons-316289", mac: "52:54:00:f9:92:2f", ip: "192.168.39.62"} in network mk-addons-316289
	I0415 11:16:36.444790  362872 main.go:141] libmachine: (addons-316289) DBG | Getting to WaitForSSH function...
	I0415 11:16:36.444835  362872 main.go:141] libmachine: (addons-316289) Reserved static IP address: 192.168.39.62
	I0415 11:16:36.444858  362872 main.go:141] libmachine: (addons-316289) Waiting for SSH to be available...
	I0415 11:16:36.447810  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:36.448324  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:36.448357  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:36.448566  362872 main.go:141] libmachine: (addons-316289) DBG | Using SSH client type: external
	I0415 11:16:36.448590  362872 main.go:141] libmachine: (addons-316289) DBG | Using SSH private key: /home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa (-rw-------)
	I0415 11:16:36.448633  362872 main.go:141] libmachine: (addons-316289) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0415 11:16:36.448657  362872 main.go:141] libmachine: (addons-316289) DBG | About to run SSH command:
	I0415 11:16:36.448669  362872 main.go:141] libmachine: (addons-316289) DBG | exit 0
	I0415 11:16:36.575998  362872 main.go:141] libmachine: (addons-316289) DBG | SSH cmd err, output: <nil>: 
	I0415 11:16:36.576290  362872 main.go:141] libmachine: (addons-316289) KVM machine creation complete!
	I0415 11:16:36.576611  362872 main.go:141] libmachine: (addons-316289) Calling .GetConfigRaw
	I0415 11:16:36.577274  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:16:36.577563  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:16:36.577851  362872 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0415 11:16:36.577871  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:16:36.579259  362872 main.go:141] libmachine: Detecting operating system of created instance...
	I0415 11:16:36.579277  362872 main.go:141] libmachine: Waiting for SSH to be available...
	I0415 11:16:36.579284  362872 main.go:141] libmachine: Getting to WaitForSSH function...
	I0415 11:16:36.579293  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:16:36.581675  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:36.582080  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:36.582125  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:36.582282  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:16:36.582500  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:16:36.582761  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:16:36.582983  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:16:36.583208  362872 main.go:141] libmachine: Using SSH client type: native
	I0415 11:16:36.583415  362872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0415 11:16:36.583429  362872 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0415 11:16:36.691447  362872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 11:16:36.691487  362872 main.go:141] libmachine: Detecting the provisioner...
	I0415 11:16:36.691498  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:16:36.694898  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:36.695385  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:36.695480  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:36.695722  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:16:36.696026  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:16:36.696221  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:16:36.696391  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:16:36.696562  362872 main.go:141] libmachine: Using SSH client type: native
	I0415 11:16:36.696788  362872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0415 11:16:36.696805  362872 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0415 11:16:36.805227  362872 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0415 11:16:36.805444  362872 main.go:141] libmachine: found compatible host: buildroot
	I0415 11:16:36.805464  362872 main.go:141] libmachine: Provisioning with buildroot...
	I0415 11:16:36.805488  362872 main.go:141] libmachine: (addons-316289) Calling .GetMachineName
	I0415 11:16:36.805870  362872 buildroot.go:166] provisioning hostname "addons-316289"
	I0415 11:16:36.805902  362872 main.go:141] libmachine: (addons-316289) Calling .GetMachineName
	I0415 11:16:36.806208  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:16:36.809116  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:36.809531  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:36.809560  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:36.809766  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:16:36.810044  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:16:36.810230  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:16:36.810369  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:16:36.810581  362872 main.go:141] libmachine: Using SSH client type: native
	I0415 11:16:36.810777  362872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0415 11:16:36.810792  362872 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-316289 && echo "addons-316289" | sudo tee /etc/hostname
	I0415 11:16:36.934301  362872 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-316289
	
	I0415 11:16:36.934334  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:16:36.937948  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:36.938286  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:36.938336  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:36.938639  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:16:36.938893  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:16:36.939076  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:16:36.939267  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:16:36.939492  362872 main.go:141] libmachine: Using SSH client type: native
	I0415 11:16:36.939710  362872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0415 11:16:36.939731  362872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-316289' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-316289/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-316289' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 11:16:37.057401  362872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 11:16:37.057442  362872 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18644-354432/.minikube CaCertPath:/home/jenkins/minikube-integration/18644-354432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18644-354432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18644-354432/.minikube}
	I0415 11:16:37.057471  362872 buildroot.go:174] setting up certificates
	I0415 11:16:37.057487  362872 provision.go:84] configureAuth start
	I0415 11:16:37.057498  362872 main.go:141] libmachine: (addons-316289) Calling .GetMachineName
	I0415 11:16:37.057859  362872 main.go:141] libmachine: (addons-316289) Calling .GetIP
	I0415 11:16:37.060771  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.061244  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:37.061279  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.061497  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:16:37.064202  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.064528  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:37.064551  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.064712  362872 provision.go:143] copyHostCerts
	I0415 11:16:37.064810  362872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18644-354432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18644-354432/.minikube/ca.pem (1082 bytes)
	I0415 11:16:37.064994  362872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18644-354432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18644-354432/.minikube/cert.pem (1123 bytes)
	I0415 11:16:37.065193  362872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18644-354432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18644-354432/.minikube/key.pem (1675 bytes)
	I0415 11:16:37.065297  362872 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18644-354432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18644-354432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18644-354432/.minikube/certs/ca-key.pem org=jenkins.addons-316289 san=[127.0.0.1 192.168.39.62 addons-316289 localhost minikube]
	I0415 11:16:37.331411  362872 provision.go:177] copyRemoteCerts
	I0415 11:16:37.331509  362872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 11:16:37.331545  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:16:37.334833  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.335284  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:37.335321  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.335526  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:16:37.335789  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:16:37.335990  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:16:37.336194  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:16:37.423700  362872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18644-354432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 11:16:37.451765  362872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18644-354432/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 11:16:37.478636  362872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18644-354432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 11:16:37.504572  362872 provision.go:87] duration metric: took 447.069493ms to configureAuth
	I0415 11:16:37.504610  362872 buildroot.go:189] setting minikube options for container-runtime
	I0415 11:16:37.504823  362872 config.go:182] Loaded profile config "addons-316289": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 11:16:37.504850  362872 main.go:141] libmachine: Checking connection to Docker...
	I0415 11:16:37.504861  362872 main.go:141] libmachine: (addons-316289) Calling .GetURL
	I0415 11:16:37.506094  362872 main.go:141] libmachine: (addons-316289) DBG | Using libvirt version 6000000
	I0415 11:16:37.508658  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.509135  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:37.509167  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.509296  362872 main.go:141] libmachine: Docker is up and running!
	I0415 11:16:37.509314  362872 main.go:141] libmachine: Reticulating splines...
	I0415 11:16:37.509324  362872 client.go:171] duration metric: took 24.256442953s to LocalClient.Create
	I0415 11:16:37.509348  362872 start.go:167] duration metric: took 24.256506089s to libmachine.API.Create "addons-316289"
	I0415 11:16:37.509365  362872 start.go:293] postStartSetup for "addons-316289" (driver="kvm2")
	I0415 11:16:37.509381  362872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 11:16:37.509414  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:16:37.509717  362872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 11:16:37.509742  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:16:37.512622  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.512956  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:37.512990  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.513221  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:16:37.513407  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:16:37.513629  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:16:37.513837  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:16:37.598654  362872 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 11:16:37.603597  362872 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 11:16:37.603630  362872 filesync.go:126] Scanning /home/jenkins/minikube-integration/18644-354432/.minikube/addons for local assets ...
	I0415 11:16:37.603722  362872 filesync.go:126] Scanning /home/jenkins/minikube-integration/18644-354432/.minikube/files for local assets ...
	I0415 11:16:37.603752  362872 start.go:296] duration metric: took 94.379708ms for postStartSetup
	I0415 11:16:37.603796  362872 main.go:141] libmachine: (addons-316289) Calling .GetConfigRaw
	I0415 11:16:37.604576  362872 main.go:141] libmachine: (addons-316289) Calling .GetIP
	I0415 11:16:37.608073  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.608491  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:37.608520  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.608799  362872 profile.go:143] Saving config to /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/config.json ...
	I0415 11:16:37.609041  362872 start.go:128] duration metric: took 24.375059569s to createHost
	I0415 11:16:37.609066  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:16:37.611728  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.612243  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:37.612264  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.612453  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:16:37.612686  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:16:37.612846  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:16:37.613052  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:16:37.613227  362872 main.go:141] libmachine: Using SSH client type: native
	I0415 11:16:37.613404  362872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0415 11:16:37.613415  362872 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 11:16:37.725034  362872 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713179797.707710648
	
	I0415 11:16:37.725062  362872 fix.go:216] guest clock: 1713179797.707710648
	I0415 11:16:37.725071  362872 fix.go:229] Guest: 2024-04-15 11:16:37.707710648 +0000 UTC Remote: 2024-04-15 11:16:37.609054436 +0000 UTC m=+24.491471777 (delta=98.656212ms)
	I0415 11:16:37.725099  362872 fix.go:200] guest clock delta is within tolerance: 98.656212ms
	I0415 11:16:37.725106  362872 start.go:83] releasing machines lock for "addons-316289", held for 24.491204672s
	I0415 11:16:37.725129  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:16:37.725469  362872 main.go:141] libmachine: (addons-316289) Calling .GetIP
	I0415 11:16:37.728013  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.728467  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:37.728493  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.728638  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:16:37.729367  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:16:37.729573  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:16:37.729713  362872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 11:16:37.729765  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:16:37.729876  362872 ssh_runner.go:195] Run: cat /version.json
	I0415 11:16:37.729903  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:16:37.732597  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.732625  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.732950  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:37.732992  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:37.733018  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.733036  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:37.733161  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:16:37.733162  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:16:37.733342  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:16:37.733357  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:16:37.733512  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:16:37.733529  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:16:37.733695  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:16:37.733699  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:16:37.813404  362872 ssh_runner.go:195] Run: systemctl --version
	I0415 11:16:37.855630  362872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 11:16:37.862312  362872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 11:16:37.862403  362872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 11:16:37.879330  362872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 11:16:37.879363  362872 start.go:494] detecting cgroup driver to use...
	I0415 11:16:37.879440  362872 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0415 11:16:37.910487  362872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 11:16:37.925101  362872 docker.go:217] disabling cri-docker service (if available) ...
	I0415 11:16:37.925182  362872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0415 11:16:37.940033  362872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0415 11:16:37.954750  362872 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0415 11:16:38.073985  362872 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0415 11:16:38.247660  362872 docker.go:233] disabling docker service ...
	I0415 11:16:38.247743  362872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0415 11:16:38.263058  362872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0415 11:16:38.277289  362872 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0415 11:16:38.417401  362872 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0415 11:16:38.549323  362872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0415 11:16:38.566197  362872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 11:16:38.588232  362872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 11:16:38.599481  362872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 11:16:38.611035  362872 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 11:16:38.611116  362872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 11:16:38.622402  362872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 11:16:38.634014  362872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 11:16:38.646710  362872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 11:16:38.658589  362872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 11:16:38.670526  362872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 11:16:38.682350  362872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 11:16:38.694374  362872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
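	The containerd configuration step above is a series of in-place `sed -i -r` rewrites against /etc/containerd/config.toml. A minimal sketch of the same pattern, run against a scratch copy so the effect is visible (the sample keys and values below are illustrative, not the VM's actual config):

```shell
#!/bin/bash
# Apply the same sed rewrites minikube runs, to a throwaway copy of config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.8"
    SystemdCgroup = true
    conf_dir = "/tmp/old-cni"
EOF
# Pin the pause image, force cgroupfs, and point CNI at /etc/cni/net.d,
# preserving each line's leading indentation via the \1 backreference.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"
cat "$cfg"
rm -f "$cfg"
```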
	I0415 11:16:38.706798  362872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 11:16:38.717982  362872 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0415 11:16:38.718050  362872 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0415 11:16:38.735401  362872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 11:16:38.746610  362872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 11:16:38.897211  362872 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 11:16:38.928900  362872 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0415 11:16:38.929021  362872 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0415 11:16:38.933877  362872 retry.go:31] will retry after 722.8924ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0415 11:16:39.658046  362872 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0415 11:16:39.664511  362872 start.go:562] Will wait 60s for crictl version
	I0415 11:16:39.664599  362872 ssh_runner.go:195] Run: which crictl
	I0415 11:16:39.668932  362872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 11:16:39.708406  362872 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.15
	RuntimeApiVersion:  v1
	I0415 11:16:39.708490  362872 ssh_runner.go:195] Run: containerd --version
	I0415 11:16:39.738353  362872 ssh_runner.go:195] Run: containerd --version
	I0415 11:16:39.767448  362872 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.7.15 ...
	I0415 11:16:39.769320  362872 main.go:141] libmachine: (addons-316289) Calling .GetIP
	I0415 11:16:39.772080  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:39.772496  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:16:39.772532  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:16:39.772773  362872 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0415 11:16:39.777477  362872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 11:16:39.790705  362872 kubeadm.go:877] updating cluster {Name:addons-316289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-316289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 11:16:39.790870  362872 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0415 11:16:39.790932  362872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0415 11:16:39.826144  362872 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0415 11:16:39.826230  362872 ssh_runner.go:195] Run: which lz4
	I0415 11:16:39.830625  362872 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0415 11:16:39.835071  362872 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 11:16:39.835109  362872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18644-354432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (402346652 bytes)
	I0415 11:16:41.264457  362872 containerd.go:563] duration metric: took 1.433860157s to copy over tarball
	I0415 11:16:41.264581  362872 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 11:16:43.690895  362872 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.426271894s)
	I0415 11:16:43.690932  362872 containerd.go:570] duration metric: took 2.426438301s to extract the tarball
	I0415 11:16:43.690940  362872 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 11:16:43.729404  362872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 11:16:43.864107  362872 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0415 11:16:43.896243  362872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0415 11:16:43.939279  362872 retry.go:31] will retry after 185.345384ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-15T11:16:43Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0415 11:16:44.125851  362872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0415 11:16:44.171582  362872 containerd.go:627] all images are preloaded for containerd runtime.
	I0415 11:16:44.171613  362872 cache_images.go:84] Images are preloaded, skipping loading
	I0415 11:16:44.171625  362872 kubeadm.go:928] updating node { 192.168.39.62 8443 v1.29.3 containerd true true} ...
	I0415 11:16:44.171823  362872 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-316289 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-316289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 11:16:44.171903  362872 ssh_runner.go:195] Run: sudo crictl info
	I0415 11:16:44.210474  362872 cni.go:84] Creating CNI manager for ""
	I0415 11:16:44.210500  362872 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0415 11:16:44.210515  362872 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 11:16:44.210554  362872 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.62 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-316289 NodeName:addons-316289 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 11:16:44.210683  362872 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-316289"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 11:16:44.210752  362872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 11:16:44.222333  362872 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 11:16:44.222418  362872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 11:16:44.233568  362872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0415 11:16:44.253031  362872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 11:16:44.271910  362872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2170 bytes)
	I0415 11:16:44.290579  362872 ssh_runner.go:195] Run: grep 192.168.39.62	control-plane.minikube.internal$ /etc/hosts
	I0415 11:16:44.294966  362872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
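	The /etc/hosts update above is an idempotent replace-then-append: strip any existing line for the name, write the fresh mapping, and copy the result back. A sketch of the same pattern against a scratch file standing in for /etc/hosts (the IPs and hostname mirror the log but are illustrative here):

```shell
#!/bin/bash
# Idempotent hosts-file update: remove any old entry for the name, append the new one.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.39.1\tcontrol-plane.minikube.internal\n' > "$hosts"
# grep -v drops lines ending in "<tab>control-plane.minikube.internal";
# the brace group then appends the replacement mapping.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '192.168.39.62\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
rm -f "$hosts"
```

	Running it twice leaves exactly one entry for the name, which is why minikube can apply it unconditionally on every start.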
	I0415 11:16:44.308935  362872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 11:16:44.442478  362872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 11:16:44.466373  362872 certs.go:68] Setting up /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289 for IP: 192.168.39.62
	I0415 11:16:44.466408  362872 certs.go:194] generating shared ca certs ...
	I0415 11:16:44.466431  362872 certs.go:226] acquiring lock for ca certs: {Name:mk26b6161875cb5697900988a62f1ac3f787f757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:16:44.466624  362872 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18644-354432/.minikube/ca.key
	I0415 11:16:44.570873  362872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18644-354432/.minikube/ca.crt ...
	I0415 11:16:44.570910  362872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18644-354432/.minikube/ca.crt: {Name:mkd5a80a2b03ad85d819f808a06f84d01cf91b9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:16:44.571140  362872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18644-354432/.minikube/ca.key ...
	I0415 11:16:44.571159  362872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18644-354432/.minikube/ca.key: {Name:mka9a64429577256123709e427608558b839eeaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:16:44.571265  362872 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18644-354432/.minikube/proxy-client-ca.key
	I0415 11:16:44.896874  362872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18644-354432/.minikube/proxy-client-ca.crt ...
	I0415 11:16:44.896909  362872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18644-354432/.minikube/proxy-client-ca.crt: {Name:mkdfcc8a0079ed73cff405e1ef3d9ec3b961ebd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:16:44.897115  362872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18644-354432/.minikube/proxy-client-ca.key ...
	I0415 11:16:44.897134  362872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18644-354432/.minikube/proxy-client-ca.key: {Name:mk416af15cb5fb90c6483c3f45920acc1ebfd861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:16:44.897244  362872 certs.go:256] generating profile certs ...
	I0415 11:16:44.897338  362872 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.key
	I0415 11:16:44.897358  362872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt with IP's: []
	I0415 11:16:45.044372  362872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt ...
	I0415 11:16:45.044414  362872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: {Name:mk738be8a5b75cd1dddb8d567783ec3345944b62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:16:45.044635  362872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.key ...
	I0415 11:16:45.044653  362872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.key: {Name:mke3038bf70065d8c290aca0c8d2aaae792f56cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:16:45.044777  362872 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/apiserver.key.ad732070
	I0415 11:16:45.044815  362872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/apiserver.crt.ad732070 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.62]
	I0415 11:16:45.254835  362872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/apiserver.crt.ad732070 ...
	I0415 11:16:45.254893  362872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/apiserver.crt.ad732070: {Name:mka627110f094e1ad49fe9a9828a4337a2f13919 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:16:45.255071  362872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/apiserver.key.ad732070 ...
	I0415 11:16:45.255087  362872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/apiserver.key.ad732070: {Name:mk91499861d0f71b119f9ca77b26d3ae47caccda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:16:45.255154  362872 certs.go:381] copying /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/apiserver.crt.ad732070 -> /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/apiserver.crt
	I0415 11:16:45.255228  362872 certs.go:385] copying /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/apiserver.key.ad732070 -> /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/apiserver.key
	I0415 11:16:45.255272  362872 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/proxy-client.key
	I0415 11:16:45.255290  362872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/proxy-client.crt with IP's: []
	I0415 11:16:45.322119  362872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/proxy-client.crt ...
	I0415 11:16:45.322152  362872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/proxy-client.crt: {Name:mk9944252653c41eb55bd592231911b2f6ba478b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:16:45.322324  362872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/proxy-client.key ...
	I0415 11:16:45.322338  362872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/proxy-client.key: {Name:mked675ce9a68863f79d3349d1dbd9182fd54124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:16:45.322526  362872 certs.go:484] found cert: /home/jenkins/minikube-integration/18644-354432/.minikube/certs/ca-key.pem (1675 bytes)
	I0415 11:16:45.322564  362872 certs.go:484] found cert: /home/jenkins/minikube-integration/18644-354432/.minikube/certs/ca.pem (1082 bytes)
	I0415 11:16:45.322588  362872 certs.go:484] found cert: /home/jenkins/minikube-integration/18644-354432/.minikube/certs/cert.pem (1123 bytes)
	I0415 11:16:45.322614  362872 certs.go:484] found cert: /home/jenkins/minikube-integration/18644-354432/.minikube/certs/key.pem (1675 bytes)
	I0415 11:16:45.323290  362872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18644-354432/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 11:16:45.360191  362872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18644-354432/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0415 11:16:45.388024  362872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18644-354432/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 11:16:45.415131  362872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18644-354432/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 11:16:45.444625  362872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0415 11:16:45.473325  362872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 11:16:45.500237  362872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 11:16:45.526808  362872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0415 11:16:45.552679  362872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18644-354432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 11:16:45.580634  362872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 11:16:45.599129  362872 ssh_runner.go:195] Run: openssl version
	I0415 11:16:45.605462  362872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 11:16:45.617770  362872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 11:16:45.623001  362872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 11:16 /usr/share/ca-certificates/minikubeCA.pem
	I0415 11:16:45.623077  362872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 11:16:45.629281  362872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 11:16:45.640841  362872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 11:16:45.645649  362872 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 11:16:45.645739  362872 kubeadm.go:391] StartCluster: {Name:addons-316289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-316289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 11:16:45.645840  362872 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0415 11:16:45.645908  362872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0415 11:16:45.683204  362872 cri.go:89] found id: ""
	I0415 11:16:45.683289  362872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 11:16:45.694072  362872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 11:16:45.704756  362872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 11:16:45.715233  362872 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 11:16:45.715257  362872 kubeadm.go:156] found existing configuration files:
	
	I0415 11:16:45.715324  362872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 11:16:45.725330  362872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 11:16:45.725401  362872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 11:16:45.736017  362872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 11:16:45.746339  362872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 11:16:45.746411  362872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 11:16:45.757725  362872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 11:16:45.767981  362872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 11:16:45.768060  362872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 11:16:45.778450  362872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 11:16:45.788783  362872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 11:16:45.788865  362872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
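	The four grep-then-rm pairs above implement a stale-config sweep: each kubeconfig is kept only if it already references the expected control-plane endpoint, otherwise it is deleted so kubeadm regenerates it. A sketch of that loop against a scratch directory (file contents here are illustrative):

```shell
#!/bin/bash
# Stale kubeconfig sweep: keep a file only if it points at the expected endpoint.
dir=$(mktemp -d)
printf 'server: https://old-endpoint:8443\n' > "$dir/admin.conf"
printf 'server: https://control-plane.minikube.internal:8443\n' > "$dir/kubelet.conf"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # A missing file fails the grep too (stderr suppressed), and rm -f tolerates it.
  grep -q 'https://control-plane.minikube.internal:8443' "$dir/$f" 2>/dev/null \
    || rm -f "$dir/$f"
done
ls "$dir"    # only kubelet.conf survives
rm -rf "$dir"
```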
	I0415 11:16:45.799304  362872 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 11:16:45.986070  362872 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 11:16:56.122166  362872 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 11:16:56.122245  362872 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 11:16:56.122334  362872 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 11:16:56.122462  362872 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 11:16:56.122609  362872 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 11:16:56.122696  362872 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 11:16:56.124618  362872 out.go:204]   - Generating certificates and keys ...
	I0415 11:16:56.124696  362872 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 11:16:56.124750  362872 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 11:16:56.124824  362872 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 11:16:56.124881  362872 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 11:16:56.124957  362872 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 11:16:56.125000  362872 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 11:16:56.125078  362872 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 11:16:56.125176  362872 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-316289 localhost] and IPs [192.168.39.62 127.0.0.1 ::1]
	I0415 11:16:56.125220  362872 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 11:16:56.125370  362872 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-316289 localhost] and IPs [192.168.39.62 127.0.0.1 ::1]
	I0415 11:16:56.125526  362872 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 11:16:56.125605  362872 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 11:16:56.125646  362872 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 11:16:56.125693  362872 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 11:16:56.125777  362872 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 11:16:56.125957  362872 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 11:16:56.126066  362872 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 11:16:56.126182  362872 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 11:16:56.126271  362872 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 11:16:56.126377  362872 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 11:16:56.126458  362872 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 11:16:56.127593  362872 out.go:204]   - Booting up control plane ...
	I0415 11:16:56.127712  362872 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 11:16:56.127823  362872 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 11:16:56.127888  362872 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 11:16:56.127975  362872 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 11:16:56.128049  362872 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 11:16:56.128105  362872 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 11:16:56.128284  362872 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 11:16:56.128404  362872 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.503008 seconds
	I0415 11:16:56.128518  362872 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 11:16:56.128663  362872 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 11:16:56.128756  362872 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 11:16:56.128941  362872 kubeadm.go:309] [mark-control-plane] Marking the node addons-316289 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 11:16:56.129007  362872 kubeadm.go:309] [bootstrap-token] Using token: kjeyfa.yexv5i6utfxir8ub
	I0415 11:16:56.130682  362872 out.go:204]   - Configuring RBAC rules ...
	I0415 11:16:56.130768  362872 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 11:16:56.130837  362872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 11:16:56.130990  362872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 11:16:56.131129  362872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 11:16:56.131245  362872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 11:16:56.131369  362872 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 11:16:56.131535  362872 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 11:16:56.131582  362872 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 11:16:56.131666  362872 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 11:16:56.131677  362872 kubeadm.go:309] 
	I0415 11:16:56.131761  362872 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 11:16:56.131771  362872 kubeadm.go:309] 
	I0415 11:16:56.131900  362872 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 11:16:56.131915  362872 kubeadm.go:309] 
	I0415 11:16:56.131939  362872 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 11:16:56.131990  362872 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 11:16:56.132033  362872 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 11:16:56.132039  362872 kubeadm.go:309] 
	I0415 11:16:56.132089  362872 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 11:16:56.132095  362872 kubeadm.go:309] 
	I0415 11:16:56.132133  362872 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 11:16:56.132139  362872 kubeadm.go:309] 
	I0415 11:16:56.132194  362872 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 11:16:56.132258  362872 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 11:16:56.132358  362872 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 11:16:56.132373  362872 kubeadm.go:309] 
	I0415 11:16:56.132506  362872 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 11:16:56.132620  362872 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 11:16:56.132634  362872 kubeadm.go:309] 
	I0415 11:16:56.132758  362872 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token kjeyfa.yexv5i6utfxir8ub \
	I0415 11:16:56.132880  362872 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8744c5dea057afdc998df40b14b5dc6b216edba4b3db7af6fc31db3f86a01826 \
	I0415 11:16:56.132926  362872 kubeadm.go:309] 	--control-plane 
	I0415 11:16:56.132944  362872 kubeadm.go:309] 
	I0415 11:16:56.133051  362872 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 11:16:56.133063  362872 kubeadm.go:309] 
	I0415 11:16:56.133165  362872 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token kjeyfa.yexv5i6utfxir8ub \
	I0415 11:16:56.133319  362872 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8744c5dea057afdc998df40b14b5dc6b216edba4b3db7af6fc31db3f86a01826 
	I0415 11:16:56.133334  362872 cni.go:84] Creating CNI manager for ""
	I0415 11:16:56.133343  362872 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0415 11:16:56.135191  362872 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0415 11:16:56.136792  362872 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0415 11:16:56.152122  362872 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
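	(The 496-byte payload copied above is minikube's bridge CNI conflist. Its exact bytes are not shown in the log; the sketch below is only the typical shape of such a conflist — a `bridge` plugin with host-local IPAM plus `portmap` — and the subnet and names are illustrative, not taken from this run.)

	```json
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	```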
	I0415 11:16:56.195550  362872 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 11:16:56.195632  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:16:56.195658  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-316289 minikube.k8s.io/updated_at=2024_04_15T11_16_56_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=fd91a0c5dbcf69c10661a6c45f66c039ce7b5f02 minikube.k8s.io/name=addons-316289 minikube.k8s.io/primary=true
	I0415 11:16:56.293909  362872 ops.go:34] apiserver oom_adj: -16
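	(The `kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default` invocation above grants the `kube-system:default` ServiceAccount cluster-admin so addon pods can manage cluster resources. The binding it creates is equivalent to this manifest:)

	```yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: minikube-rbac
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cluster-admin
	subjects:
	- kind: ServiceAccount
	  name: default
	  namespace: kube-system
	```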
	I0415 11:16:56.404651  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:16:56.905713  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:16:57.405355  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:16:57.904922  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:16:58.405692  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:16:58.905087  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:16:59.405138  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:16:59.905018  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:00.405602  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:00.904742  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:01.404935  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:01.904770  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:02.405775  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:02.905418  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:03.405460  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:03.905682  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:04.404888  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:04.905357  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:05.405707  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:05.905658  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:06.404931  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:06.904880  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:07.405534  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:07.905631  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:08.405465  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:08.904983  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:09.405000  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:09.904817  362872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 11:17:10.016172  362872 kubeadm.go:1107] duration metric: took 13.82061733s to wait for elevateKubeSystemPrivileges
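	(The burst of `kubectl get sa default` runs above is a readiness poll: minikube retries roughly every 500ms until the `default` ServiceAccount exists, then records the elapsed time. A minimal sketch of that pattern — `wait_for` is a hypothetical helper, not minikube's actual code, and the demo uses `true` in place of the real `kubectl` probe:)

	```shell
	# Retry a command every 500ms until it succeeds or a deadline (in seconds) passes.
	wait_for() {
	  deadline=$(( $(date +%s) + $1 ))
	  shift
	  until "$@" >/dev/null 2>&1; do
	    # Give up once the deadline is reached.
	    [ "$(date +%s)" -ge "$deadline" ] && return 1
	    sleep 0.5
	  done
	}

	# Demo probe: `true` succeeds immediately (the real loop probes the apiserver).
	wait_for 5 true && echo "ready"
	```
	
	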
	W0415 11:17:10.016222  362872 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 11:17:10.016234  362872 kubeadm.go:393] duration metric: took 24.370502904s to StartCluster
	I0415 11:17:10.016257  362872 settings.go:142] acquiring lock: {Name:mke6e26046a2b79d219b751ce497f1172f6c5788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:17:10.016421  362872 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18644-354432/kubeconfig
	I0415 11:17:10.016888  362872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18644-354432/kubeconfig: {Name:mkef45fe364a4c143ef6e349075e33af87eb719a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:17:10.017147  362872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 11:17:10.017189  362872 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0415 11:17:10.019012  362872 out.go:177] * Verifying Kubernetes components...
	I0415 11:17:10.017271  362872 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0415 11:17:10.017447  362872 config.go:182] Loaded profile config "addons-316289": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 11:17:10.020627  362872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 11:17:10.019105  362872 addons.go:69] Setting cloud-spanner=true in profile "addons-316289"
	I0415 11:17:10.020706  362872 addons.go:234] Setting addon cloud-spanner=true in "addons-316289"
	I0415 11:17:10.019123  362872 addons.go:69] Setting yakd=true in profile "addons-316289"
	I0415 11:17:10.020774  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:10.020808  362872 addons.go:234] Setting addon yakd=true in "addons-316289"
	I0415 11:17:10.019117  362872 addons.go:69] Setting gcp-auth=true in profile "addons-316289"
	I0415 11:17:10.020887  362872 mustload.go:65] Loading cluster: addons-316289
	I0415 11:17:10.019136  362872 addons.go:69] Setting inspektor-gadget=true in profile "addons-316289"
	I0415 11:17:10.020993  362872 addons.go:234] Setting addon inspektor-gadget=true in "addons-316289"
	I0415 11:17:10.021020  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:10.019141  362872 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-316289"
	I0415 11:17:10.021080  362872 config.go:182] Loaded profile config "addons-316289": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 11:17:10.019146  362872 addons.go:69] Setting storage-provisioner=true in profile "addons-316289"
	I0415 11:17:10.021185  362872 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-316289"
	I0415 11:17:10.021192  362872 addons.go:234] Setting addon storage-provisioner=true in "addons-316289"
	I0415 11:17:10.021220  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:10.019146  362872 addons.go:69] Setting default-storageclass=true in profile "addons-316289"
	I0415 11:17:10.021232  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:10.021258  362872 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-316289"
	I0415 11:17:10.021281  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.021307  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.021420  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.021440  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.021457  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.021464  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.021569  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.021587  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.021638  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.021647  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.019148  362872 addons.go:69] Setting registry=true in profile "addons-316289"
	I0415 11:17:10.021668  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.021694  362872 addons.go:234] Setting addon registry=true in "addons-316289"
	I0415 11:17:10.019154  362872 addons.go:69] Setting volumesnapshots=true in profile "addons-316289"
	I0415 11:17:10.021736  362872 addons.go:234] Setting addon volumesnapshots=true in "addons-316289"
	I0415 11:17:10.021757  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:10.021842  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.019160  362872 addons.go:69] Setting helm-tiller=true in profile "addons-316289"
	I0415 11:17:10.021966  362872 addons.go:234] Setting addon helm-tiller=true in "addons-316289"
	I0415 11:17:10.019163  362872 addons.go:69] Setting metrics-server=true in profile "addons-316289"
	I0415 11:17:10.022034  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:10.022046  362872 addons.go:234] Setting addon metrics-server=true in "addons-316289"
	I0415 11:17:10.022079  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:10.022127  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.022146  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.019164  362872 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-316289"
	I0415 11:17:10.022389  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.022397  362872 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-316289"
	I0415 11:17:10.022409  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.022411  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.022423  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:10.022433  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.022544  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:10.019162  362872 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-316289"
	I0415 11:17:10.022637  362872 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-316289"
	I0415 11:17:10.019142  362872 addons.go:69] Setting ingress-dns=true in profile "addons-316289"
	I0415 11:17:10.022670  362872 addons.go:234] Setting addon ingress-dns=true in "addons-316289"
	I0415 11:17:10.019204  362872 addons.go:69] Setting ingress=true in profile "addons-316289"
	I0415 11:17:10.022690  362872 addons.go:234] Setting addon ingress=true in "addons-316289"
	I0415 11:17:10.020856  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:10.022921  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:10.023002  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.023033  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.023040  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.023051  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.023101  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.023122  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.023262  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:10.023693  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.023734  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.042514  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45459
	I0415 11:17:10.042518  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
	I0415 11:17:10.043151  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.043252  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.043811  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.043832  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.043813  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.043883  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.044305  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.044354  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.044985  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.045021  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.045032  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.045043  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.049884  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33091
	I0415 11:17:10.050484  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.051048  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.051067  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.051469  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.052114  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.052161  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.053776  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42623
	I0415 11:17:10.055959  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.055991  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.056382  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.056422  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.057150  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.057722  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.057742  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.058142  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.058698  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.058725  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.067846  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38933
	I0415 11:17:10.068237  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46521
	I0415 11:17:10.068389  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38983
	I0415 11:17:10.068512  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44197
	I0415 11:17:10.068622  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46183
	I0415 11:17:10.068703  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I0415 11:17:10.069320  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.069392  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.069512  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.069613  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.069810  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.069822  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.070248  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.070268  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.070298  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.070316  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.070337  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.070569  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.070608  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.070627  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.070754  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.070940  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.071122  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.071818  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.071862  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.071902  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.072109  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.072683  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.072776  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.073284  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.073317  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.075353  362872 addons.go:234] Setting addon default-storageclass=true in "addons-316289"
	I0415 11:17:10.075390  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:10.075765  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.075802  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.076043  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.076118  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:10.076259  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.076271  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.076358  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
	I0415 11:17:10.076879  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.076898  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.076943  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.077081  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.079016  362872 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0415 11:17:10.077488  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.077893  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:10.080717  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.080743  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.081043  362872 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0415 11:17:10.081065  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0415 11:17:10.081086  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:10.081161  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.082480  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.082550  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.082983  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41575
	I0415 11:17:10.083251  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.083874  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.083910  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.083944  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.084521  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.084549  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.084573  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.084961  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.084996  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:10.085016  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.085152  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.085265  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:10.085480  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:10.085688  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:10.085851  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:10.087830  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:10.089968  362872 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 11:17:10.089822  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41987
	I0415 11:17:10.091505  362872 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 11:17:10.091521  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 11:17:10.091542  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:10.091989  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.092603  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.092622  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.093036  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.093622  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.093649  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.094622  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.094977  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:10.094997  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.095267  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:10.095471  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:10.095707  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:10.095901  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:10.106206  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43041
	I0415 11:17:10.106777  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.107412  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.107439  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.107879  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.108487  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.108526  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.109921  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41981
	I0415 11:17:10.110686  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.111418  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.111446  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.111879  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.112487  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.112513  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.114897  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39451
	I0415 11:17:10.115257  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.115367  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46197
	I0415 11:17:10.115818  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.115843  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.115944  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.115966  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45465
	I0415 11:17:10.116205  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.116401  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:10.116486  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.116604  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.116648  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.116978  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35419
	I0415 11:17:10.117073  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.117092  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.117129  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.117312  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.117394  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.117465  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.117595  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.117784  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.117809  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.118129  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.118263  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.120073  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:10.120137  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:10.120383  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:10.122201  362872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0415 11:17:10.123782  362872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0415 11:17:10.125073  362872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0415 11:17:10.123712  362872 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0415 11:17:10.123751  362872 out.go:177]   - Using image docker.io/registry:2.8.3
	I0415 11:17:10.126133  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0415 11:17:10.126422  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
	I0415 11:17:10.126474  362872 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0415 11:17:10.128291  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.128341  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.128769  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0415 11:17:10.131032  362872 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0415 11:17:10.129972  362872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0415 11:17:10.130005  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43239
	I0415 11:17:10.130025  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:10.130497  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.130622  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.131847  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46425
	I0415 11:17:10.132366  362872 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0415 11:17:10.133769  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.132382  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0415 11:17:10.133807  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:10.134991  362872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0415 11:17:10.136211  362872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0415 11:17:10.137426  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.135288  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40013
	I0415 11:17:10.136481  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.133328  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.132400  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.134446  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.132415  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.136702  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0415 11:17:10.137218  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:10.137813  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:10.138594  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.138617  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.138665  362872 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0415 11:17:10.138000  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:10.138784  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:10.139265  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.140354  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.139283  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:10.139444  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.140419  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.139532  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.139590  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.139708  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.140503  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:10.140003  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.140291  362872 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0415 11:17:10.141793  362872 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0415 11:17:10.140559  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:10.141813  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0415 11:17:10.141834  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:10.140587  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.140691  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:10.140725  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.141037  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.141919  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.141198  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.141972  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.141216  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.141619  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.142012  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:10.142041  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.142022  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:10.142451  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.143046  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41919
	I0415 11:17:10.143209  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.143447  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.143541  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.144073  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.144144  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:10.144193  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.144238  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.144241  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:10.146458  362872 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0415 11:17:10.145261  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:10.145827  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.146329  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:10.147257  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.147694  362872 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0415 11:17:10.148611  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0415 11:17:10.148628  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:10.148540  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:10.148692  362872 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0415 11:17:10.150426  362872 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0415 11:17:10.150440  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0415 11:17:10.150456  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:10.148877  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:10.148917  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.149010  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:10.150560  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.152054  362872 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0415 11:17:10.150302  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
	I0415 11:17:10.151201  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:10.151996  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.152853  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.153293  362872 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0415 11:17:10.153305  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0415 11:17:10.153323  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:10.153367  362872 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0415 11:17:10.155486  362872 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0415 11:17:10.155503  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0415 11:17:10.155521  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:10.153478  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:10.155578  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.153707  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:10.154483  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.154500  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:10.156621  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:10.156840  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:10.157136  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:10.157610  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.158178  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:10.158199  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.158242  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.158344  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:10.158759  362872 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-316289"
	I0415 11:17:10.158804  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:10.158939  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.158956  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.159173  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.159214  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:10.159676  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:10.159741  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.160153  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.160194  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:10.160242  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.161069  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:10.161337  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:10.161356  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.161385  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:10.161744  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0415 11:17:10.161874  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:10.162353  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.162450  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:10.163144  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.163165  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.163171  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:10.163530  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:10.165995  362872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0415 11:17:10.164528  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.164681  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I0415 11:17:10.164924  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42009
	I0415 11:17:10.165180  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:10.165189  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.166100  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:10.166482  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:10.166592  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.166673  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.167911  362872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0415 11:17:10.168008  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.168202  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.168227  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.168335  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:10.168586  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.169375  362872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0415 11:17:10.170969  362872 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0415 11:17:10.170990  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0415 11:17:10.171008  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:10.169406  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.169422  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.169534  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:10.171420  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:10.171496  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.172000  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.173348  362872 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0415 11:17:10.172060  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.172205  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.174940  362872 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0415 11:17:10.174959  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0415 11:17:10.174976  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:10.174587  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.175065  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:10.175085  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.175273  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:10.175338  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:10.175491  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:10.175564  362872 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 11:17:10.175582  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 11:17:10.175601  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:10.175685  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:10.175852  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:10.175953  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:10.178396  362872 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0415 11:17:10.179885  362872 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0415 11:17:10.179909  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0415 11:17:10.179928  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:10.178469  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.180002  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:10.180026  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.178876  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.180047  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:10.180063  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.179138  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:10.179540  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:10.180245  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:10.180291  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:10.180451  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:10.180476  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:10.180662  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:10.180960  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:10.182578  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42481
	I0415 11:17:10.182880  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.182974  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.183258  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:10.183280  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.183420  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.183433  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.183500  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:10.183616  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:10.183795  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:10.183847  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.183935  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:10.184592  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:10.184630  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0415 11:17:10.192554  362872 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56550->192.168.39.62:22: read: connection reset by peer
	I0415 11:17:10.192590  362872 retry.go:31] will retry after 354.033168ms: ssh: handshake failed: read tcp 192.168.39.1:56550->192.168.39.62:22: read: connection reset by peer
	I0415 11:17:10.219280  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43021
	I0415 11:17:10.219747  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:10.220301  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:10.220330  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:10.220718  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:10.220949  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:10.222463  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:10.224708  362872 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0415 11:17:10.225785  362872 out.go:177]   - Using image docker.io/busybox:stable
	I0415 11:17:10.227135  362872 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0415 11:17:10.227158  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0415 11:17:10.227182  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:10.229860  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.230211  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:10.230238  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:10.230392  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:10.230577  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:10.230738  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:10.230891  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:10.660731  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 11:17:10.754575  362872 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0415 11:17:10.754613  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0415 11:17:10.836015  362872 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0415 11:17:10.836058  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0415 11:17:10.905946  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0415 11:17:10.916080  362872 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0415 11:17:10.916103  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0415 11:17:11.055087  362872 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0415 11:17:11.055127  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0415 11:17:11.059111  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0415 11:17:11.066524  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 11:17:11.080350  362872 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0415 11:17:11.080383  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0415 11:17:11.100302  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0415 11:17:11.123989  362872 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0415 11:17:11.124027  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0415 11:17:11.148282  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0415 11:17:11.150403  362872 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0415 11:17:11.150424  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0415 11:17:11.197036  362872 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.179853769s)
	I0415 11:17:11.197142  362872 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.176480106s)
	I0415 11:17:11.197232  362872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 11:17:11.197267  362872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 11:17:11.220949  362872 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0415 11:17:11.220991  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0415 11:17:11.230104  362872 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0415 11:17:11.230141  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0415 11:17:11.304345  362872 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0415 11:17:11.304381  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0415 11:17:11.486500  362872 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0415 11:17:11.486541  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0415 11:17:11.539562  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0415 11:17:11.553637  362872 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0415 11:17:11.553666  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0415 11:17:11.571196  362872 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0415 11:17:11.571223  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0415 11:17:11.644388  362872 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0415 11:17:11.644419  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0415 11:17:11.690275  362872 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0415 11:17:11.690310  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0415 11:17:11.750723  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0415 11:17:11.779248  362872 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0415 11:17:11.779290  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0415 11:17:11.818555  362872 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0415 11:17:11.818584  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0415 11:17:11.819608  362872 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0415 11:17:11.819627  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0415 11:17:11.862176  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0415 11:17:11.908110  362872 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0415 11:17:11.908154  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0415 11:17:11.941418  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0415 11:17:11.991326  362872 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0415 11:17:11.991357  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0415 11:17:12.005411  362872 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0415 11:17:12.005440  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0415 11:17:12.020050  362872 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0415 11:17:12.020085  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0415 11:17:12.093656  362872 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0415 11:17:12.093686  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0415 11:17:12.201128  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0415 11:17:12.323500  362872 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0415 11:17:12.323532  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0415 11:17:12.330730  362872 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0415 11:17:12.330757  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0415 11:17:12.468004  362872 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0415 11:17:12.468047  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0415 11:17:12.568780  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0415 11:17:12.639132  362872 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0415 11:17:12.639176  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0415 11:17:12.866079  362872 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0415 11:17:12.866110  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0415 11:17:12.935285  362872 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0415 11:17:12.935318  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0415 11:17:13.094608  362872 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0415 11:17:13.094633  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0415 11:17:13.127599  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0415 11:17:13.243824  362872 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0415 11:17:13.243859  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0415 11:17:13.484559  362872 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0415 11:17:13.484584  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0415 11:17:13.838055  362872 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0415 11:17:13.838086  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0415 11:17:14.031974  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0415 11:17:16.274867  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.61408744s)
	I0415 11:17:16.274932  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:16.274944  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:16.275294  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:16.275353  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:16.275365  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:16.275380  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:16.275392  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:16.275710  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:16.275730  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:16.932709  362872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0415 11:17:16.932772  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:16.936123  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:16.936645  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:16.936681  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:16.936912  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:16.937142  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:16.937323  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:16.937487  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:17.383013  362872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0415 11:17:17.455214  362872 addons.go:234] Setting addon gcp-auth=true in "addons-316289"
	I0415 11:17:17.455332  362872 host.go:66] Checking if "addons-316289" exists ...
	I0415 11:17:17.455842  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:17.455890  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:17.472851  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37867
	I0415 11:17:17.473388  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:17.474012  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:17.474048  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:17.474469  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:17.475028  362872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:17:17.475064  362872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:17:17.490488  362872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38463
	I0415 11:17:17.491037  362872 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:17:17.491614  362872 main.go:141] libmachine: Using API Version  1
	I0415 11:17:17.491658  362872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:17:17.492016  362872 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:17:17.492206  362872 main.go:141] libmachine: (addons-316289) Calling .GetState
	I0415 11:17:17.494034  362872 main.go:141] libmachine: (addons-316289) Calling .DriverName
	I0415 11:17:17.494298  362872 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0415 11:17:17.494323  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHHostname
	I0415 11:17:17.497508  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:17.497934  362872 main.go:141] libmachine: (addons-316289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:92:2f", ip: ""} in network mk-addons-316289: {Iface:virbr1 ExpiryTime:2024-04-15 12:16:28 +0000 UTC Type:0 Mac:52:54:00:f9:92:2f Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-316289 Clientid:01:52:54:00:f9:92:2f}
	I0415 11:17:17.497989  362872 main.go:141] libmachine: (addons-316289) DBG | domain addons-316289 has defined IP address 192.168.39.62 and MAC address 52:54:00:f9:92:2f in network mk-addons-316289
	I0415 11:17:17.498153  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHPort
	I0415 11:17:17.498339  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHKeyPath
	I0415 11:17:17.498535  362872 main.go:141] libmachine: (addons-316289) Calling .GetSSHUsername
	I0415 11:17:17.498690  362872 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/addons-316289/id_rsa Username:docker}
	I0415 11:17:19.730509  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.671355081s)
	I0415 11:17:19.730557  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.663998471s)
	I0415 11:17:19.730580  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.730594  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.730607  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.730612  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.630263207s)
	I0415 11:17:19.730650  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.730652  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.582340208s)
	I0415 11:17:19.730668  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.730677  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.730686  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.730620  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.730743  362872 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.533452971s)
	I0415 11:17:19.730758  362872 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.533507931s)
	I0415 11:17:19.730766  362872 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0415 11:17:19.730774  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.191182513s)
	I0415 11:17:19.730849  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.730859  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.730934  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.980178902s)
	I0415 11:17:19.730951  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.730962  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.731034  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.868826665s)
	I0415 11:17:19.731049  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.731057  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.731159  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.731165  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.789716045s)
	I0415 11:17:19.731185  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.731186  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.731194  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.731207  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.731214  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.731222  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.731223  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.731229  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.731231  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.731239  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.731246  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.731273  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.530109078s)
	I0415 11:17:19.731299  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.731304  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.731307  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.731312  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.731320  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.731326  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.731164  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.731471  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.16264884s)
	W0415 11:17:19.731503  362872 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0415 11:17:19.731526  362872 retry.go:31] will retry after 138.549996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
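	[editor's note] The failure above is the classic CRD-establishment race: the snapshot CRDs and a VolumeSnapshotClass custom resource are submitted in the same `kubectl apply`, and the API server has not yet registered the new kind when the custom resource arrives, hence "no matches for kind \"VolumeSnapshotClass\"". minikube handles this by retrying (and, below, by re-applying with `--force`). A minimal sketch of the usual manual workaround — split the apply and wait for the CRD to be Established — using the file paths from this log (the explicit `kubectl wait` step is my addition, not something minikube runs):

	```shell
	# Apply the snapshot CRDs on their own first.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.29.3/kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml

	# Wait until the API server reports the new kinds as Established,
	# so discovery can map "VolumeSnapshotClass" before it is used.
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s

	# Only then apply the custom resource that previously had no matching kind.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.29.3/kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	```

	In this run the retry succeeds about two seconds later (see the `apply --force` completion at 11:17:21.949880), so the race was transient.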
	I0415 11:17:19.731593  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.603956919s)
	I0415 11:17:19.731610  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.731619  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.732026  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.732063  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.732085  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.732094  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.732101  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.732108  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.732101  362872 node_ready.go:35] waiting up to 6m0s for node "addons-316289" to be "Ready" ...
	I0415 11:17:19.732158  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.732178  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.732185  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.732334  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.732347  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.732357  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.732365  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.733014  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.733041  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.733048  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.733183  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.733220  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.733226  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.733234  362872 addons.go:470] Verifying addon metrics-server=true in "addons-316289"
	I0415 11:17:19.733385  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.733403  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.733469  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.827492268s)
	I0415 11:17:19.733489  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.733496  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.733550  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.733557  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.733564  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.733570  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.733857  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.733877  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.733886  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.733894  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.733904  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.733907  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.733912  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.733916  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.733925  362872 addons.go:470] Verifying addon registry=true in "addons-316289"
	I0415 11:17:19.733962  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.733968  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.736289  362872 out.go:177] * Verifying registry addon...
	I0415 11:17:19.734356  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.734380  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.734476  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.734490  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.734512  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.734533  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.734555  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.734934  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.734975  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.737741  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.737752  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.737760  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.738316  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.739865  362872 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-316289 service yakd-dashboard -n yakd-dashboard
	
	I0415 11:17:19.738738  362872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0415 11:17:19.738754  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.738818  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.740071  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.738818  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.738838  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.738843  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.738880  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.738971  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.740100  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.740219  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.740233  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.741930  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.740241  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.741969  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.741943  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.742195  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.742215  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.742215  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.742226  362872 addons.go:470] Verifying addon ingress=true in "addons-316289"
	I0415 11:17:19.742246  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.742254  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.743734  362872 out.go:177] * Verifying ingress addon...
	I0415 11:17:19.742311  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:19.742332  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.745075  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.745791  362872 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0415 11:17:19.770075  362872 node_ready.go:49] node "addons-316289" has status "Ready":"True"
	I0415 11:17:19.770110  362872 node_ready.go:38] duration metric: took 37.987052ms for node "addons-316289" to be "Ready" ...
	I0415 11:17:19.770125  362872 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 11:17:19.806168  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.806195  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.806197  362872 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0415 11:17:19.806221  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:19.806462  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.806481  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	W0415 11:17:19.806576  362872 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
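	[editor's note] The warning above is a separate, benign issue: marking one StorageClass default while demoting another hit Kubernetes' optimistic-concurrency check ("the object has been modified; please apply your changes to the latest version and try again"), meaning something else updated `local-path` between minikube's read and write. A hedged reproduction of the toggle the addon performs — re-running the patch after a conflict is the standard remedy (the class names follow this log; the commands are my reconstruction, not minikube's exact calls):

	```shell
	# Demote the previous default StorageClass, then promote the new one.
	# A 409 conflict here is transient; simply re-run the failed patch.
	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass standard -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
	```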
	I0415 11:17:19.806920  362872 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0415 11:17:19.806945  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:19.820308  362872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-5gp9l" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:19.831138  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:19.831160  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:19.831545  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:19.831576  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:19.870614  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0415 11:17:20.236105  362872 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-316289" context rescaled to 1 replicas
	I0415 11:17:20.255746  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:20.260298  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:20.762952  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:20.807457  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:20.814927  362872 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.320598687s)
	I0415 11:17:20.814944  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.782907921s)
	I0415 11:17:20.815006  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:20.817172  362872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0415 11:17:20.815032  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:20.820204  362872 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0415 11:17:20.819181  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:20.819239  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:20.821887  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:20.821916  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:20.821926  362872 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0415 11:17:20.821933  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:20.821940  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0415 11:17:20.822321  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:20.822339  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:20.822354  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:20.822372  362872 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-316289"
	I0415 11:17:20.823732  362872 out.go:177] * Verifying csi-hostpath-driver addon...
	I0415 11:17:20.825721  362872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0415 11:17:20.849530  362872 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0415 11:17:20.849560  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:20.956090  362872 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0415 11:17:20.956126  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0415 11:17:21.024143  362872 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0415 11:17:21.024178  362872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0415 11:17:21.137915  362872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0415 11:17:21.258633  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:21.258795  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:21.342153  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:21.745100  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:21.752512  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:21.828909  362872 pod_ready.go:102] pod "coredns-76f75df574-5gp9l" in "kube-system" namespace has status "Ready":"False"
	I0415 11:17:21.836566  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:21.949880  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.079195996s)
	I0415 11:17:21.949959  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:21.949977  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:21.950412  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:21.950453  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:21.950473  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:21.950484  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:21.950493  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:21.950768  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:21.950795  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:22.248390  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:22.256670  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:22.366295  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:22.513461  362872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.375487968s)
	I0415 11:17:22.513539  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:22.513556  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:22.513919  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:22.513940  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:22.513948  362872 main.go:141] libmachine: Making call to close driver server
	I0415 11:17:22.513957  362872 main.go:141] libmachine: (addons-316289) Calling .Close
	I0415 11:17:22.513970  362872 main.go:141] libmachine: (addons-316289) DBG | Closing plugin on server side
	I0415 11:17:22.514335  362872 main.go:141] libmachine: Successfully made call to close driver server
	I0415 11:17:22.514354  362872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 11:17:22.515922  362872 addons.go:470] Verifying addon gcp-auth=true in "addons-316289"
	I0415 11:17:22.518098  362872 out.go:177] * Verifying gcp-auth addon...
	I0415 11:17:22.520039  362872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0415 11:17:22.555805  362872 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0415 11:17:22.555830  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:22.745197  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:22.758391  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:22.847674  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:23.026814  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:23.246086  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:23.249871  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:23.331798  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:23.524005  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:23.744935  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:23.750922  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:23.832956  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:24.024053  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:24.246351  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:24.250262  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:24.331540  362872 pod_ready.go:102] pod "coredns-76f75df574-5gp9l" in "kube-system" namespace has status "Ready":"False"
	I0415 11:17:24.337168  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:24.523784  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:24.747476  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:24.752076  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:24.833847  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:25.025050  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:25.245841  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:25.249552  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:25.331021  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:25.523929  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:25.766334  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:25.768203  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:25.831971  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:26.024621  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:26.245463  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:26.250143  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:26.332362  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:26.523351  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:26.752795  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:26.756454  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:26.827510  362872 pod_ready.go:102] pod "coredns-76f75df574-5gp9l" in "kube-system" namespace has status "Ready":"False"
	I0415 11:17:26.833828  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:27.024388  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:27.245116  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:27.250542  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:27.332166  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:27.524252  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:27.744701  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:27.750697  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:27.831762  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:28.024031  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:28.245408  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:28.251949  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:28.331023  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:28.523939  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:28.746000  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:28.750208  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:28.831328  362872 pod_ready.go:102] pod "coredns-76f75df574-5gp9l" in "kube-system" namespace has status "Ready":"False"
	I0415 11:17:28.833794  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:29.026410  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:29.251502  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:29.259715  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:29.334535  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:29.523702  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:29.745683  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:29.750008  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:29.831533  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:30.024853  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:30.245659  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:30.249652  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:30.331193  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:30.525032  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:30.745740  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:30.749804  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:30.832812  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:31.025940  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:31.244667  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:31.250384  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:31.327886  362872 pod_ready.go:102] pod "coredns-76f75df574-5gp9l" in "kube-system" namespace has status "Ready":"False"
	I0415 11:17:31.333682  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:31.524509  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:31.745172  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:31.750948  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:31.831306  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:32.024571  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:32.244904  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:32.250227  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:32.332758  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:32.523727  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:32.745714  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:32.750089  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:32.840587  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:33.024490  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:33.246252  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:33.250619  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:33.327944  362872 pod_ready.go:102] pod "coredns-76f75df574-5gp9l" in "kube-system" namespace has status "Ready":"False"
	I0415 11:17:33.335157  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:33.524065  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:33.746696  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:33.750494  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:33.830593  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:34.024404  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:34.246921  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:34.252519  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:34.335751  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:34.529661  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:34.746921  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:34.751039  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:34.831628  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:35.023654  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:35.245580  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:35.251489  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:35.328573  362872 pod_ready.go:102] pod "coredns-76f75df574-5gp9l" in "kube-system" namespace has status "Ready":"False"
	I0415 11:17:35.331666  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:35.525657  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:35.745662  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:35.751316  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:35.832534  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:36.024475  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:36.247267  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:36.250860  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:36.333342  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:36.689172  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:36.746010  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:36.754652  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:36.833427  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:37.024522  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:37.245525  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:37.250197  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:37.331397  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:37.524476  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:37.750003  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:37.752321  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:37.829908  362872 pod_ready.go:102] pod "coredns-76f75df574-5gp9l" in "kube-system" namespace has status "Ready":"False"
	I0415 11:17:37.838374  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:38.025601  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:38.245692  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:38.250018  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:38.332257  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:38.524721  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:38.745915  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:38.750107  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:38.831250  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:39.024157  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:39.246090  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:39.251188  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:39.358432  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:39.524149  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:39.744694  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:39.750980  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:39.833628  362872 pod_ready.go:102] pod "coredns-76f75df574-5gp9l" in "kube-system" namespace has status "Ready":"False"
	I0415 11:17:39.836693  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:40.024113  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:40.244620  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:40.250140  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:40.332073  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:40.523736  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:40.745280  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:40.752282  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:40.835443  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:41.024428  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:41.244978  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:41.252327  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:41.330650  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:41.523738  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:41.745483  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:41.751376  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:41.831976  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:42.024609  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:42.245225  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:42.250730  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:42.332052  362872 pod_ready.go:102] pod "coredns-76f75df574-5gp9l" in "kube-system" namespace has status "Ready":"False"
	I0415 11:17:42.337209  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:42.524311  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:42.757108  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:42.759170  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:42.837478  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:43.024591  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:43.251381  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:43.254015  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:43.334562  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:43.525973  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:43.745751  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:43.752410  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:43.831166  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:44.024380  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:44.245729  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:44.251328  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:44.332764  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:44.529472  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:44.746614  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:44.749913  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:44.830998  362872 pod_ready.go:102] pod "coredns-76f75df574-5gp9l" in "kube-system" namespace has status "Ready":"False"
	I0415 11:17:44.833663  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:45.024982  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:45.247398  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:45.252385  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:45.336279  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:45.524897  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:45.745536  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:45.752069  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:45.834910  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:46.024890  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:46.245623  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:46.253356  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:46.333065  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:46.524635  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:46.746627  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:46.753882  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:46.833224  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:47.025424  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:47.245740  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:47.250154  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:47.329436  362872 pod_ready.go:102] pod "coredns-76f75df574-5gp9l" in "kube-system" namespace has status "Ready":"False"
	I0415 11:17:47.332812  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:47.525443  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:47.746066  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:47.750684  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:47.831422  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:48.026136  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:48.245548  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:48.250280  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:48.333633  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:48.524682  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:48.745661  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:48.750643  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:48.835977  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:49.025353  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:49.246461  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 11:17:49.249890  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:49.329809  362872 pod_ready.go:102] pod "coredns-76f75df574-5gp9l" in "kube-system" namespace has status "Ready":"False"
	I0415 11:17:49.332137  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:49.524047  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:49.745891  362872 kapi.go:107] duration metric: took 30.007146638s to wait for kubernetes.io/minikube-addons=registry ...
	I0415 11:17:49.750205  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:49.832368  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:50.024294  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:50.253585  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:50.333173  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:50.852066  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:50.852764  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:50.858220  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:50.880039  362872 pod_ready.go:92] pod "coredns-76f75df574-5gp9l" in "kube-system" namespace has status "Ready":"True"
	I0415 11:17:50.880070  362872 pod_ready.go:81] duration metric: took 31.059725144s for pod "coredns-76f75df574-5gp9l" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:50.880082  362872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-sbx9d" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:50.898952  362872 pod_ready.go:97] error getting pod "coredns-76f75df574-sbx9d" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-sbx9d" not found
	I0415 11:17:50.898995  362872 pod_ready.go:81] duration metric: took 18.906422ms for pod "coredns-76f75df574-sbx9d" in "kube-system" namespace to be "Ready" ...
	E0415 11:17:50.899008  362872 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-sbx9d" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-sbx9d" not found
	I0415 11:17:50.899015  362872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-316289" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:50.910378  362872 pod_ready.go:92] pod "etcd-addons-316289" in "kube-system" namespace has status "Ready":"True"
	I0415 11:17:50.910401  362872 pod_ready.go:81] duration metric: took 11.380889ms for pod "etcd-addons-316289" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:50.910413  362872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-316289" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:50.933613  362872 pod_ready.go:92] pod "kube-apiserver-addons-316289" in "kube-system" namespace has status "Ready":"True"
	I0415 11:17:50.933643  362872 pod_ready.go:81] duration metric: took 23.223388ms for pod "kube-apiserver-addons-316289" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:50.933662  362872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-316289" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:50.947145  362872 pod_ready.go:92] pod "kube-controller-manager-addons-316289" in "kube-system" namespace has status "Ready":"True"
	I0415 11:17:50.947173  362872 pod_ready.go:81] duration metric: took 13.504777ms for pod "kube-controller-manager-addons-316289" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:50.947185  362872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jxscs" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:51.026274  362872 pod_ready.go:92] pod "kube-proxy-jxscs" in "kube-system" namespace has status "Ready":"True"
	I0415 11:17:51.026303  362872 pod_ready.go:81] duration metric: took 79.111589ms for pod "kube-proxy-jxscs" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:51.026313  362872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-316289" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:51.028463  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:51.252973  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:51.331326  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:51.427529  362872 pod_ready.go:92] pod "kube-scheduler-addons-316289" in "kube-system" namespace has status "Ready":"True"
	I0415 11:17:51.427556  362872 pod_ready.go:81] duration metric: took 401.235978ms for pod "kube-scheduler-addons-316289" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:51.427568  362872 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-75d6c48ddd-gkzzk" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:51.525310  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:51.750262  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:51.831936  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:52.025970  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:52.251187  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:52.332166  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:52.524083  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:52.750639  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:52.835826  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:53.027095  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:53.251221  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:53.334136  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:53.435233  362872 pod_ready.go:102] pod "metrics-server-75d6c48ddd-gkzzk" in "kube-system" namespace has status "Ready":"False"
	I0415 11:17:53.524858  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:53.751167  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:53.835847  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:53.934544  362872 pod_ready.go:92] pod "metrics-server-75d6c48ddd-gkzzk" in "kube-system" namespace has status "Ready":"True"
	I0415 11:17:53.934572  362872 pod_ready.go:81] duration metric: took 2.506996493s for pod "metrics-server-75d6c48ddd-gkzzk" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:53.934582  362872 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-shr4w" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:54.164612  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:54.225787  362872 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-shr4w" in "kube-system" namespace has status "Ready":"True"
	I0415 11:17:54.225815  362872 pod_ready.go:81] duration metric: took 291.2263ms for pod "nvidia-device-plugin-daemonset-shr4w" in "kube-system" namespace to be "Ready" ...
	I0415 11:17:54.225835  362872 pod_ready.go:38] duration metric: took 34.455695372s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 11:17:54.225855  362872 api_server.go:52] waiting for apiserver process to appear ...
	I0415 11:17:54.225943  362872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 11:17:54.251078  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:54.256009  362872 api_server.go:72] duration metric: took 44.238778527s to wait for apiserver process to appear ...
	I0415 11:17:54.256040  362872 api_server.go:88] waiting for apiserver healthz status ...
	I0415 11:17:54.256094  362872 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I0415 11:17:54.260334  362872 api_server.go:279] https://192.168.39.62:8443/healthz returned 200:
	ok
	I0415 11:17:54.261541  362872 api_server.go:141] control plane version: v1.29.3
	I0415 11:17:54.261567  362872 api_server.go:131] duration metric: took 5.521827ms to wait for apiserver health ...
	I0415 11:17:54.261576  362872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 11:17:54.333889  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:54.431180  362872 system_pods.go:59] 18 kube-system pods found
	I0415 11:17:54.431236  362872 system_pods.go:61] "coredns-76f75df574-5gp9l" [70d1d8b9-1434-4d5a-9966-0aafe40b9545] Running
	I0415 11:17:54.431244  362872 system_pods.go:61] "csi-hostpath-attacher-0" [cedcfe8a-2cbd-4cd8-9f54-03a88286d398] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0415 11:17:54.431252  362872 system_pods.go:61] "csi-hostpath-resizer-0" [0be46e16-43e7-4f18-bae6-12e8422cde73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0415 11:17:54.431259  362872 system_pods.go:61] "csi-hostpathplugin-qb59q" [126673fb-4eaf-4026-8f12-269448384fa3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0415 11:17:54.431264  362872 system_pods.go:61] "etcd-addons-316289" [69cb7929-d793-4312-8775-527ba5506814] Running
	I0415 11:17:54.431268  362872 system_pods.go:61] "kube-apiserver-addons-316289" [ca663033-d088-4a98-9e0c-a682410f4598] Running
	I0415 11:17:54.431271  362872 system_pods.go:61] "kube-controller-manager-addons-316289" [39b3812c-8590-4aff-b9cb-c88f0e976021] Running
	I0415 11:17:54.431277  362872 system_pods.go:61] "kube-ingress-dns-minikube" [e340ce7f-98e0-40af-aa75-51341a7257aa] Running
	I0415 11:17:54.431281  362872 system_pods.go:61] "kube-proxy-jxscs" [b1d5ab09-c351-4291-94cb-f29b18d8ae78] Running
	I0415 11:17:54.431287  362872 system_pods.go:61] "kube-scheduler-addons-316289" [8eda30a2-247c-4ff3-9c58-3fdc23fd4639] Running
	I0415 11:17:54.431290  362872 system_pods.go:61] "metrics-server-75d6c48ddd-gkzzk" [90826a2e-0cae-4a73-9caf-e74d8d966b44] Running
	I0415 11:17:54.431293  362872 system_pods.go:61] "nvidia-device-plugin-daemonset-shr4w" [3be294d1-7baf-42a9-984d-a773ddcee738] Running
	I0415 11:17:54.431296  362872 system_pods.go:61] "registry-m6lkv" [e6935040-f766-47e5-bd50-a5be1079e707] Running
	I0415 11:17:54.431299  362872 system_pods.go:61] "registry-proxy-vjpvs" [5f2cf2a0-2836-4ea8-8409-f3af4a3baac7] Running
	I0415 11:17:54.431305  362872 system_pods.go:61] "snapshot-controller-58dbcc7b99-l8j96" [ff95ee05-6125-4749-acb6-b02bb80713ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 11:17:54.431311  362872 system_pods.go:61] "snapshot-controller-58dbcc7b99-xzvr7" [4d6b0dd8-3404-4598-9357-5f0cc39686c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 11:17:54.431319  362872 system_pods.go:61] "storage-provisioner" [c6504ef2-8181-46bd-942a-d6ce0d0301f9] Running
	I0415 11:17:54.431324  362872 system_pods.go:61] "tiller-deploy-7b677967b9-qltlq" [54865dad-a845-41ed-97ae-b6f5ef0ba018] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0415 11:17:54.431334  362872 system_pods.go:74] duration metric: took 169.752299ms to wait for pod list to return data ...
	I0415 11:17:54.431349  362872 default_sa.go:34] waiting for default service account to be created ...
	I0415 11:17:54.524846  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:54.624596  362872 default_sa.go:45] found service account: "default"
	I0415 11:17:54.624627  362872 default_sa.go:55] duration metric: took 193.271569ms for default service account to be created ...
	I0415 11:17:54.624637  362872 system_pods.go:116] waiting for k8s-apps to be running ...
	I0415 11:17:54.751254  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:54.832446  362872 system_pods.go:86] 18 kube-system pods found
	I0415 11:17:54.832479  362872 system_pods.go:89] "coredns-76f75df574-5gp9l" [70d1d8b9-1434-4d5a-9966-0aafe40b9545] Running
	I0415 11:17:54.832489  362872 system_pods.go:89] "csi-hostpath-attacher-0" [cedcfe8a-2cbd-4cd8-9f54-03a88286d398] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0415 11:17:54.832496  362872 system_pods.go:89] "csi-hostpath-resizer-0" [0be46e16-43e7-4f18-bae6-12e8422cde73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0415 11:17:54.832507  362872 system_pods.go:89] "csi-hostpathplugin-qb59q" [126673fb-4eaf-4026-8f12-269448384fa3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0415 11:17:54.832513  362872 system_pods.go:89] "etcd-addons-316289" [69cb7929-d793-4312-8775-527ba5506814] Running
	I0415 11:17:54.832520  362872 system_pods.go:89] "kube-apiserver-addons-316289" [ca663033-d088-4a98-9e0c-a682410f4598] Running
	I0415 11:17:54.832527  362872 system_pods.go:89] "kube-controller-manager-addons-316289" [39b3812c-8590-4aff-b9cb-c88f0e976021] Running
	I0415 11:17:54.832533  362872 system_pods.go:89] "kube-ingress-dns-minikube" [e340ce7f-98e0-40af-aa75-51341a7257aa] Running
	I0415 11:17:54.832538  362872 system_pods.go:89] "kube-proxy-jxscs" [b1d5ab09-c351-4291-94cb-f29b18d8ae78] Running
	I0415 11:17:54.832548  362872 system_pods.go:89] "kube-scheduler-addons-316289" [8eda30a2-247c-4ff3-9c58-3fdc23fd4639] Running
	I0415 11:17:54.832558  362872 system_pods.go:89] "metrics-server-75d6c48ddd-gkzzk" [90826a2e-0cae-4a73-9caf-e74d8d966b44] Running
	I0415 11:17:54.832577  362872 system_pods.go:89] "nvidia-device-plugin-daemonset-shr4w" [3be294d1-7baf-42a9-984d-a773ddcee738] Running
	I0415 11:17:54.832581  362872 system_pods.go:89] "registry-m6lkv" [e6935040-f766-47e5-bd50-a5be1079e707] Running
	I0415 11:17:54.832585  362872 system_pods.go:89] "registry-proxy-vjpvs" [5f2cf2a0-2836-4ea8-8409-f3af4a3baac7] Running
	I0415 11:17:54.832591  362872 system_pods.go:89] "snapshot-controller-58dbcc7b99-l8j96" [ff95ee05-6125-4749-acb6-b02bb80713ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 11:17:54.832600  362872 system_pods.go:89] "snapshot-controller-58dbcc7b99-xzvr7" [4d6b0dd8-3404-4598-9357-5f0cc39686c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 11:17:54.832605  362872 system_pods.go:89] "storage-provisioner" [c6504ef2-8181-46bd-942a-d6ce0d0301f9] Running
	I0415 11:17:54.832612  362872 system_pods.go:89] "tiller-deploy-7b677967b9-qltlq" [54865dad-a845-41ed-97ae-b6f5ef0ba018] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0415 11:17:54.832619  362872 system_pods.go:126] duration metric: took 207.97761ms to wait for k8s-apps to be running ...
	I0415 11:17:54.832628  362872 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 11:17:54.832688  362872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 11:17:54.837808  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:54.858233  362872 system_svc.go:56] duration metric: took 25.595338ms WaitForService to wait for kubelet
	I0415 11:17:54.858262  362872 kubeadm.go:576] duration metric: took 44.841036424s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 11:17:54.858286  362872 node_conditions.go:102] verifying NodePressure condition ...
	I0415 11:17:55.024534  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:55.027569  362872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 11:17:55.027607  362872 node_conditions.go:123] node cpu capacity is 2
	I0415 11:17:55.027623  362872 node_conditions.go:105] duration metric: took 169.331887ms to run NodePressure ...
	I0415 11:17:55.027639  362872 start.go:240] waiting for startup goroutines ...
	I0415 11:17:55.252842  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:55.332378  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:55.524243  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:55.750512  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:55.834579  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:56.024698  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:56.251902  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:56.347029  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:56.523852  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:56.751922  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:56.831903  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:57.023966  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:57.255385  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:57.333140  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:57.524463  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:57.750908  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:57.832299  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:58.119884  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:58.250806  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:58.332423  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:58.524067  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:58.750705  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:58.836471  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:59.026270  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:59.252060  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:59.332312  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:17:59.525584  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:17:59.750920  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:17:59.831851  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:00.023896  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:00.250419  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:00.331499  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:00.524651  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:00.751541  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:00.831551  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:01.025189  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:01.250960  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:01.334479  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:01.524044  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:01.751549  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:01.831446  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:02.025916  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:02.250335  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:02.332450  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:02.524570  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:02.751309  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:02.832316  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:03.024757  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:03.251361  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:03.332076  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:03.524668  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:03.753008  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:03.832914  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:04.024042  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:04.250597  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:04.331880  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:04.525341  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:04.750921  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:04.836323  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:05.024548  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:05.252145  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:05.332168  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:05.524331  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:05.751068  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:05.835906  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:06.024762  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:06.251113  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:06.334787  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:06.524606  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:06.751086  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:06.832128  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:07.024023  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:07.251263  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:07.332584  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:07.525086  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:07.750272  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:07.834594  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:08.025980  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:08.262406  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:08.333356  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:08.524177  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:08.750731  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:08.836151  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:09.024264  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:09.258843  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:09.331275  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:09.530156  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:09.750667  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:09.833727  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:10.025166  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:10.251912  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:10.559985  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:10.560274  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:10.751609  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:10.837443  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:11.024847  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:11.251580  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:11.348380  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:11.679544  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:11.750823  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:11.831935  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:12.024187  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:12.250486  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:12.332851  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:12.524257  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:12.750879  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:12.836176  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:13.024206  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:13.251490  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:13.337345  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:13.524638  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:13.765303  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:13.833235  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:14.023948  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:14.250065  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:14.332492  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:14.523499  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:14.752933  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:14.841831  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:15.031077  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:15.250926  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:15.335322  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:15.524671  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:15.770391  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:15.848817  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:16.025108  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:16.253709  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:16.340368  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:16.524338  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:16.754049  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:16.853428  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:17.024621  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:17.251992  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:17.333856  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:17.524737  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:17.759772  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:17.832041  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:18.024473  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:18.251635  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:18.332589  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:18.523891  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:18.752040  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:18.837511  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:19.025400  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:19.251281  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:19.331406  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:19.524406  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:19.751258  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:19.833994  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:20.029982  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:20.253235  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:20.341462  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:20.524647  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:20.751743  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:20.832218  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:21.024478  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:21.250969  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:21.333010  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:21.524554  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:21.752321  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:21.831756  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:22.025333  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:22.252556  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:22.333093  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:22.524714  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:22.753084  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:22.835398  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:23.024137  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:23.252955  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:23.334453  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:23.524183  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:23.752297  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:23.836561  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:24.023849  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:24.253504  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:24.331512  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:24.525136  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:24.751248  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:24.833731  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 11:18:25.026124  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:25.568944  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:25.569996  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:25.571771  362872 kapi.go:107] duration metric: took 1m4.746045793s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0415 11:18:25.751066  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:26.025163  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:26.250097  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:26.523992  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:26.751224  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:27.024868  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:27.251714  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:27.524303  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:27.751513  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:28.024602  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:28.251464  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:28.525307  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:29.039206  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:29.040296  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:29.253509  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:29.523676  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:29.751776  362872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 11:18:30.023939  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:30.250675  362872 kapi.go:107] duration metric: took 1m10.504879731s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0415 11:18:30.524815  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:31.025606  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:31.524680  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:32.024549  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:32.524531  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:33.030181  362872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 11:18:33.525004  362872 kapi.go:107] duration metric: took 1m11.004962489s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0415 11:18:33.526835  362872 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-316289 cluster.
	I0415 11:18:33.528138  362872 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0415 11:18:33.529255  362872 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0415 11:18:33.530417  362872 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, metrics-server, cloud-spanner, nvidia-device-plugin, yakd, helm-tiller, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0415 11:18:33.531705  362872 addons.go:505] duration metric: took 1m23.514447565s for enable addons: enabled=[storage-provisioner ingress-dns metrics-server cloud-spanner nvidia-device-plugin yakd helm-tiller inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0415 11:18:33.531750  362872 start.go:245] waiting for cluster config update ...
	I0415 11:18:33.531777  362872 start.go:254] writing updated cluster config ...
	I0415 11:18:33.532042  362872 ssh_runner.go:195] Run: rm -f paused
	I0415 11:18:33.587759  362872 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0415 11:18:33.589722  362872 out.go:177] * Done! kubectl is now configured to use "addons-316289" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	0be3d8281b00a       dd1b12fcb6097       11 seconds ago       Running             hello-world-app                          0                   6f8ae182b7e6e       hello-world-app-5d77478584-p2kpw
	b86ae1026909f       7373e995f4086       15 seconds ago       Running             headlamp                                 0                   18920d6d5a112       headlamp-5b77dbd7c4-tpqsd
	91cc80433603a       e289a478ace02       24 seconds ago       Running             nginx                                    0                   f7166c04f3724       nginx
	71b5f5acc7e44       db2fc13d44d50       54 seconds ago       Running             gcp-auth                                 0                   89adfaa11a925       gcp-auth-7d69788767-9cldb
	b3d292c00e498       738351fd438f0       About a minute ago   Running             csi-snapshotter                          0                   6f7cb14dc3785       csi-hostpathplugin-qb59q
	048d53ff71194       931dbfd16f87c       About a minute ago   Running             csi-provisioner                          0                   6f7cb14dc3785       csi-hostpathplugin-qb59q
	22ebf5d2a3726       e899260153aed       About a minute ago   Running             liveness-probe                           0                   6f7cb14dc3785       csi-hostpathplugin-qb59q
	d83c73de75ac6       e255e073c508c       About a minute ago   Running             hostpath                                 0                   6f7cb14dc3785       csi-hostpathplugin-qb59q
	e4b87d98093e5       88ef14a257f42       About a minute ago   Running             node-driver-registrar                    0                   6f7cb14dc3785       csi-hostpathplugin-qb59q
	67f3c1b2d8d31       59cbb42146a37       About a minute ago   Running             csi-attacher                             0                   8dee0511b56cd       csi-hostpath-attacher-0
	1f20af26507c1       19a639eda60f0       About a minute ago   Running             csi-resizer                              0                   848813b6f83bc       csi-hostpath-resizer-0
	f2a84dfde4e23       a1ed5895ba635       About a minute ago   Running             csi-external-health-monitor-controller   0                   6f7cb14dc3785       csi-hostpathplugin-qb59q
	ee9267f83f72e       b29d748098e32       About a minute ago   Exited              patch                                    1                   4ff13220f338e       ingress-nginx-admission-patch-qcjkd
	3160605b81fb6       b29d748098e32       About a minute ago   Exited              create                                   0                   7f6977444e4dd       ingress-nginx-admission-create-qxrpt
	e790f664fad43       31de47c733c91       About a minute ago   Running             yakd                                     0                   5caf558a26f0e       yakd-dashboard-9947fc6bf-89vjq
	cdb092eabe479       6e38f40d628db       2 minutes ago        Running             storage-provisioner                      0                   662d756e8a33c       storage-provisioner
	03e840b0b514d       cbb01a7bd410d       2 minutes ago        Running             coredns                                  0                   2ece774b6008b       coredns-76f75df574-5gp9l
	9d298ffd6ee8a       a1d263b5dc5b0       2 minutes ago        Running             kube-proxy                               0                   9c55b96cffdd8       kube-proxy-jxscs
	07942458e6695       3861cfcd7c04c       2 minutes ago        Running             etcd                                     0                   b18954f691ea9       etcd-addons-316289
	cdafb6ce26200       39f995c9f1996       2 minutes ago        Running             kube-apiserver                           0                   e1221954df488       kube-apiserver-addons-316289
	1a3ceb1d5cb96       8c390d98f50c0       2 minutes ago        Running             kube-scheduler                           0                   55df615e48b57       kube-scheduler-addons-316289
	f39cc7bdef68c       6052a25da3f97       2 minutes ago        Running             kube-controller-manager                  0                   f5760c96f9872       kube-controller-manager-addons-316289
	
	
	==> containerd <==
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.043374959Z" level=info msg="shim disconnected" id=14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4 namespace=k8s.io
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.043459684Z" level=warning msg="cleaning up after shim disconnected" id=14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4 namespace=k8s.io
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.043479117Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.087510429Z" level=info msg="StopContainer for \"14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4\" returns successfully"
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.088795362Z" level=info msg="StopPodSandbox for \"d58d93ceb92edd859b5e130d6219d561de5b1c03fc01afff821ccb538c39126e\""
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.088883516Z" level=info msg="Container to stop \"14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.090302251Z" level=info msg="StopContainer for \"d37a7ef2f317c6c29b7fe7ca638f952abd6771d6cd824fbf63711f740f28f5db\" returns successfully"
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.095010140Z" level=info msg="StopPodSandbox for \"1ac5f001a155f8320857e747f92fc39a7ec712a68cff258125b1876e3cfc0c40\""
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.095724194Z" level=info msg="Container to stop \"d37a7ef2f317c6c29b7fe7ca638f952abd6771d6cd824fbf63711f740f28f5db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.153880568Z" level=info msg="shim disconnected" id=d58d93ceb92edd859b5e130d6219d561de5b1c03fc01afff821ccb538c39126e namespace=k8s.io
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.153955272Z" level=warning msg="cleaning up after shim disconnected" id=d58d93ceb92edd859b5e130d6219d561de5b1c03fc01afff821ccb538c39126e namespace=k8s.io
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.153968750Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.156835807Z" level=info msg="shim disconnected" id=1ac5f001a155f8320857e747f92fc39a7ec712a68cff258125b1876e3cfc0c40 namespace=k8s.io
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.157076003Z" level=warning msg="cleaning up after shim disconnected" id=1ac5f001a155f8320857e747f92fc39a7ec712a68cff258125b1876e3cfc0c40 namespace=k8s.io
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.157227256Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.269880898Z" level=info msg="TearDown network for sandbox \"1ac5f001a155f8320857e747f92fc39a7ec712a68cff258125b1876e3cfc0c40\" successfully"
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.270010969Z" level=info msg="StopPodSandbox for \"1ac5f001a155f8320857e747f92fc39a7ec712a68cff258125b1876e3cfc0c40\" returns successfully"
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.291000747Z" level=info msg="TearDown network for sandbox \"d58d93ceb92edd859b5e130d6219d561de5b1c03fc01afff821ccb538c39126e\" successfully"
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.291055604Z" level=info msg="StopPodSandbox for \"d58d93ceb92edd859b5e130d6219d561de5b1c03fc01afff821ccb538c39126e\" returns successfully"
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.512373131Z" level=info msg="RemoveContainer for \"14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4\""
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.524233247Z" level=info msg="RemoveContainer for \"14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4\" returns successfully"
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.525086913Z" level=error msg="ContainerStatus for \"14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4\": not found"
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.528223561Z" level=info msg="RemoveContainer for \"d37a7ef2f317c6c29b7fe7ca638f952abd6771d6cd824fbf63711f740f28f5db\""
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.536156309Z" level=info msg="RemoveContainer for \"d37a7ef2f317c6c29b7fe7ca638f952abd6771d6cd824fbf63711f740f28f5db\" returns successfully"
	Apr 15 11:19:26 addons-316289 containerd[652]: time="2024-04-15T11:19:26.554503342Z" level=error msg="ContainerStatus for \"d37a7ef2f317c6c29b7fe7ca638f952abd6771d6cd824fbf63711f740f28f5db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d37a7ef2f317c6c29b7fe7ca638f952abd6771d6cd824fbf63711f740f28f5db\": not found"
	
	
	==> coredns [03e840b0b514d80222cb65d0163daeea33aeda87aeb65322b9eedcd42c9828fd] <==
	[INFO] 10.244.0.21:45546 - 10330 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000094825s
	[INFO] 10.244.0.21:45546 - 63216 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000093233s
	[INFO] 10.244.0.21:45546 - 5842 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000080356s
	[INFO] 10.244.0.21:55451 - 63520 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053672s
	[INFO] 10.244.0.21:45546 - 53889 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000340994s
	[INFO] 10.244.0.21:55451 - 40202 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000038486s
	[INFO] 10.244.0.21:55451 - 3192 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033785s
	[INFO] 10.244.0.21:55451 - 26179 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034168s
	[INFO] 10.244.0.21:55451 - 60762 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077949s
	[INFO] 10.244.0.21:55451 - 46378 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069445s
	[INFO] 10.244.0.21:55451 - 39476 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074522s
	[INFO] 10.244.0.21:39357 - 21571 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000084558s
	[INFO] 10.244.0.21:39357 - 23462 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000099274s
	[INFO] 10.244.0.21:39357 - 53375 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061239s
	[INFO] 10.244.0.21:39357 - 40120 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000195391s
	[INFO] 10.244.0.21:39357 - 23937 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050971s
	[INFO] 10.244.0.21:39357 - 40064 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000194727s
	[INFO] 10.244.0.21:36701 - 36143 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000118419s
	[INFO] 10.244.0.21:39357 - 282 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000216568s
	[INFO] 10.244.0.21:36701 - 41528 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000190523s
	[INFO] 10.244.0.21:36701 - 1192 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076566s
	[INFO] 10.244.0.21:36701 - 45125 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068142s
	[INFO] 10.244.0.21:36701 - 41119 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000216819s
	[INFO] 10.244.0.21:36701 - 40571 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073394s
	[INFO] 10.244.0.21:36701 - 23978 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044054s
	
	
	==> describe nodes <==
	Name:               addons-316289
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-316289
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd91a0c5dbcf69c10661a6c45f66c039ce7b5f02
	                    minikube.k8s.io/name=addons-316289
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T11_16_56_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-316289
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-316289"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 11:16:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-316289
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 11:19:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 11:18:59 +0000   Mon, 15 Apr 2024 11:16:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 11:18:59 +0000   Mon, 15 Apr 2024 11:16:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 11:18:59 +0000   Mon, 15 Apr 2024 11:16:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 11:18:59 +0000   Mon, 15 Apr 2024 11:16:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    addons-316289
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f6048b8fb5f4fbaa5ebe8c2845b9ff0
	  System UUID:                6f6048b8-fb5f-4fba-a5eb-e8c2845b9ff0
	  Boot ID:                    bf3e8ace-b818-42ba-9f8d-f3c4e20965c1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.15
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-p2kpw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  gcp-auth                    gcp-auth-7d69788767-9cldb                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  headlamp                    headlamp-5b77dbd7c4-tpqsd                0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 coredns-76f75df574-5gp9l                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m17s
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 csi-hostpathplugin-qb59q                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 etcd-addons-316289                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m31s
	  kube-system                 kube-apiserver-addons-316289             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-controller-manager-addons-316289    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-jxscs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-addons-316289             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-89vjq           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     2m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m16s  kube-proxy       
	  Normal  Starting                 2m31s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m31s  kubelet          Node addons-316289 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m31s  kubelet          Node addons-316289 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m31s  kubelet          Node addons-316289 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m31s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m31s  kubelet          Node addons-316289 status is now: NodeReady
	  Normal  RegisteredNode           2m18s  node-controller  Node addons-316289 event: Registered Node addons-316289 in Controller
	
	
	==> dmesg <==
	[  +4.654623] systemd-fstab-generator[863]: Ignoring "noauto" option for root device
	[  +0.062597] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.731635] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	[  +0.074955] kauditd_printk_skb: 69 callbacks suppressed
	[Apr15 11:17] systemd-fstab-generator[1439]: Ignoring "noauto" option for root device
	[  +0.165661] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.266096] kauditd_printk_skb: 111 callbacks suppressed
	[  +5.198740] kauditd_printk_skb: 125 callbacks suppressed
	[  +5.025824] kauditd_printk_skb: 89 callbacks suppressed
	[  +9.847160] kauditd_printk_skb: 10 callbacks suppressed
	[ +21.480286] kauditd_printk_skb: 6 callbacks suppressed
	[Apr15 11:18] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.025845] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.177061] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.325365] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.388044] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.294406] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.912688] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.703712] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.383196] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.408081] kauditd_printk_skb: 69 callbacks suppressed
	[Apr15 11:19] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.187654] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.001311] kauditd_printk_skb: 16 callbacks suppressed
	[  +8.344200] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [07942458e669535ca519d59108d32f73bd83a4f2066a5eefedb681719de02ee7] <==
	{"level":"warn","ts":"2024-04-15T11:18:29.02746Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T11:18:28.716343Z","time spent":"311.11091ms","remote":"127.0.0.1:42110","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-04-15T11:18:29.027897Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T11:18:28.673479Z","time spent":"353.102281ms","remote":"127.0.0.1:42256","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1130 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-04-15T11:18:34.1375Z","caller":"traceutil/trace.go:171","msg":"trace[387349701] transaction","detail":"{read_only:false; response_revision:1179; number_of_response:1; }","duration":"127.131134ms","start":"2024-04-15T11:18:34.010351Z","end":"2024-04-15T11:18:34.137482Z","steps":["trace[387349701] 'process raft request'  (duration: 125.981051ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T11:18:37.064598Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.298699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" ","response":"range_response_count:1 size:1412"}
	{"level":"info","ts":"2024-04-15T11:18:37.065369Z","caller":"traceutil/trace.go:171","msg":"trace[208490129] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1190; }","duration":"116.130248ms","start":"2024-04-15T11:18:36.94922Z","end":"2024-04-15T11:18:37.06535Z","steps":["trace[208490129] 'range keys from in-memory index tree'  (duration: 115.120313ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T11:18:58.849899Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.373806ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12922673279841022548 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/ranges/serviceips\" mod_revision:1422 > success:<request_put:<key:\"/registry/ranges/serviceips\" value_size:105292 >> failure:<request_range:<key:\"/registry/ranges/serviceips\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-15T11:18:58.850175Z","caller":"traceutil/trace.go:171","msg":"trace[1020265221] transaction","detail":"{read_only:false; response_revision:1430; number_of_response:1; }","duration":"142.348897ms","start":"2024-04-15T11:18:58.707746Z","end":"2024-04-15T11:18:58.850095Z","steps":["trace[1020265221] 'process raft request'  (duration: 142.251476ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T11:18:58.856269Z","caller":"traceutil/trace.go:171","msg":"trace[2001983013] transaction","detail":"{read_only:false; response_revision:1429; number_of_response:1; }","duration":"257.600672ms","start":"2024-04-15T11:18:58.592742Z","end":"2024-04-15T11:18:58.850342Z","steps":["trace[2001983013] 'process raft request'  (duration: 122.51519ms)","trace[2001983013] 'compare'  (duration: 134.237657ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-15T11:18:58.856539Z","caller":"traceutil/trace.go:171","msg":"trace[1125566443] linearizableReadLoop","detail":"{readStateIndex:1479; appliedIndex:1478; }","duration":"165.775011ms","start":"2024-04-15T11:18:58.690753Z","end":"2024-04-15T11:18:58.856528Z","steps":["trace[1125566443] 'read index received'  (duration: 24.513331ms)","trace[1125566443] 'applied index is now lower than readState.Index'  (duration: 141.260733ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T11:18:58.856876Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.112167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:183"}
	{"level":"info","ts":"2024-04-15T11:18:58.856927Z","caller":"traceutil/trace.go:171","msg":"trace[1425222854] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:1430; }","duration":"166.189483ms","start":"2024-04-15T11:18:58.69073Z","end":"2024-04-15T11:18:58.85692Z","steps":["trace[1425222854] 'agreement among raft nodes before linearized reading'  (duration: 166.074564ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T11:18:58.85704Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.05411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-15T11:18:58.857079Z","caller":"traceutil/trace.go:171","msg":"trace[577275911] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1430; }","duration":"139.116897ms","start":"2024-04-15T11:18:58.717957Z","end":"2024-04-15T11:18:58.857074Z","steps":["trace[577275911] 'agreement among raft nodes before linearized reading'  (duration: 139.066084ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T11:18:58.860054Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.670512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/ingress-nginx/ingress-nginx-controller-65496f9567-flvt8.17c6700d729a0eac\" ","response":"range_response_count:1 size:797"}
	{"level":"info","ts":"2024-04-15T11:18:58.860158Z","caller":"traceutil/trace.go:171","msg":"trace[903419469] range","detail":"{range_begin:/registry/events/ingress-nginx/ingress-nginx-controller-65496f9567-flvt8.17c6700d729a0eac; range_end:; response_count:1; response_revision:1430; }","duration":"112.795169ms","start":"2024-04-15T11:18:58.747351Z","end":"2024-04-15T11:18:58.860146Z","steps":["trace[903419469] 'agreement among raft nodes before linearized reading'  (duration: 112.633489ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T11:19:10.904502Z","caller":"traceutil/trace.go:171","msg":"trace[1694915367] linearizableReadLoop","detail":"{readStateIndex:1637; appliedIndex:1636; }","duration":"164.088957ms","start":"2024-04-15T11:19:10.740369Z","end":"2024-04-15T11:19:10.904458Z","steps":["trace[1694915367] 'read index received'  (duration: 162.875509ms)","trace[1694915367] 'applied index is now lower than readState.Index'  (duration: 1.212863ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T11:19:10.904695Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.288623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" ","response":"range_response_count:1 size:982"}
	{"level":"info","ts":"2024-04-15T11:19:10.904746Z","caller":"traceutil/trace.go:171","msg":"trace[2129549126] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1584; }","duration":"164.391577ms","start":"2024-04-15T11:19:10.740344Z","end":"2024-04-15T11:19:10.904736Z","steps":["trace[2129549126] 'agreement among raft nodes before linearized reading'  (duration: 164.221131ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T11:19:10.90498Z","caller":"traceutil/trace.go:171","msg":"trace[88196188] transaction","detail":"{read_only:false; response_revision:1584; number_of_response:1; }","duration":"225.472844ms","start":"2024-04-15T11:19:10.679498Z","end":"2024-04-15T11:19:10.904971Z","steps":["trace[88196188] 'process raft request'  (duration: 223.976627ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T11:19:15.625372Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.763416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/hello-world-app-5d77478584-p2kpw\" ","response":"range_response_count:1 size:3215"}
	{"level":"info","ts":"2024-04-15T11:19:15.626391Z","caller":"traceutil/trace.go:171","msg":"trace[1232388770] range","detail":"{range_begin:/registry/pods/default/hello-world-app-5d77478584-p2kpw; range_end:; response_count:1; response_revision:1654; }","duration":"183.786786ms","start":"2024-04-15T11:19:15.442587Z","end":"2024-04-15T11:19:15.626374Z","steps":["trace[1232388770] 'range keys from in-memory index tree'  (duration: 182.637896ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T11:19:15.625786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.921862ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:4032"}
	{"level":"info","ts":"2024-04-15T11:19:15.626839Z","caller":"traceutil/trace.go:171","msg":"trace[1990217627] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1654; }","duration":"114.022204ms","start":"2024-04-15T11:19:15.512805Z","end":"2024-04-15T11:19:15.626827Z","steps":["trace[1990217627] 'range keys from in-memory index tree'  (duration: 112.690249ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T11:19:15.625819Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.321018ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-15T11:19:15.627334Z","caller":"traceutil/trace.go:171","msg":"trace[68580284] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1654; }","duration":"142.857693ms","start":"2024-04-15T11:19:15.484465Z","end":"2024-04-15T11:19:15.627323Z","steps":["trace[68580284] 'count revisions from in-memory index tree'  (duration: 141.27258ms)"],"step_count":1}
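The etcd lines above are structured JSON (zap format), so the recurring "apply request took too long" warnings can be filtered mechanically rather than scanned by eye. A minimal sketch, assuming log lines shaped like the ones shown (the `slow_requests` helper and the 100ms threshold are illustrative, not part of the test harness):

```python
import json

def slow_requests(lines, threshold_ms=100.0):
    """Yield (took_ms, request) for etcd 'apply request took too long' warnings
    whose reported duration exceeds threshold_ms. Non-JSON lines are skipped."""
    for raw in lines:
        try:
            entry = json.loads(raw)
        except ValueError:
            continue  # not a structured etcd line
        if entry.get("msg") != "apply request took too long":
            continue
        took = entry.get("took", "")
        if took.endswith("ms"):
            took_ms = float(took[:-2])
            if took_ms > threshold_ms:
                yield took_ms, entry.get("request", "")

# Example line copied from the dump above.
line = ('{"level":"warn","ts":"2024-04-15T11:18:37.064598Z",'
        '"caller":"etcdserver/util.go:170","msg":"apply request took too long",'
        '"took":"115.298699ms","expected-duration":"100ms",'
        '"prefix":"read-only range ",'
        '"request":"key:\\"/registry/persistentvolumeclaims/default/test-pvc\\" "}')

for ms, req in slow_requests([line]):
    print(f"{ms:.1f}ms  {req}")
```

Sorting the yielded durations makes it easy to see whether the slowness is a few outliers or sustained I/O pressure on the test VM.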
	
	
	==> gcp-auth [71b5f5acc7e447a64ee5d8a8423d10340a6d65196a16fe7dcc2a2d62cf1e58b2] <==
	2024/04/15 11:18:32 GCP Auth Webhook started!
	2024/04/15 11:18:33 Ready to marshal response ...
	2024/04/15 11:18:33 Ready to write response ...
	2024/04/15 11:18:33 Ready to marshal response ...
	2024/04/15 11:18:33 Ready to write response ...
	2024/04/15 11:18:45 Ready to marshal response ...
	2024/04/15 11:18:45 Ready to write response ...
	2024/04/15 11:18:45 Ready to marshal response ...
	2024/04/15 11:18:45 Ready to write response ...
	2024/04/15 11:18:46 Ready to marshal response ...
	2024/04/15 11:18:46 Ready to write response ...
	2024/04/15 11:18:51 Ready to marshal response ...
	2024/04/15 11:18:51 Ready to write response ...
	2024/04/15 11:18:58 Ready to marshal response ...
	2024/04/15 11:18:58 Ready to write response ...
	2024/04/15 11:19:05 Ready to marshal response ...
	2024/04/15 11:19:05 Ready to write response ...
	2024/04/15 11:19:05 Ready to marshal response ...
	2024/04/15 11:19:05 Ready to write response ...
	2024/04/15 11:19:05 Ready to marshal response ...
	2024/04/15 11:19:05 Ready to write response ...
	2024/04/15 11:19:09 Ready to marshal response ...
	2024/04/15 11:19:09 Ready to write response ...
	2024/04/15 11:19:14 Ready to marshal response ...
	2024/04/15 11:19:14 Ready to write response ...
	
	
	==> kernel <==
	 11:19:27 up 3 min,  0 users,  load average: 2.69, 1.43, 0.57
	Linux addons-316289 5.10.207 #1 SMP Thu Apr 11 21:52:14 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cdafb6ce262007a07aaa23d0e5bee974bb1608e84ec9c3db4de6eca4e1595d6a] <==
	Trace[754380267]:  ---"Txn call completed" 498ms (11:18:25.558)]
	Trace[754380267]: ---"Object stored in database" 499ms (11:18:25.558)
	Trace[754380267]: [512.550789ms] [512.550789ms] END
	E0415 11:18:49.028491       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.62:8443->10.244.0.25:38000: read: connection reset by peer
	I0415 11:18:58.391321       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0415 11:18:58.882565       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.173.173"}
	I0415 11:18:59.347376       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0415 11:19:00.433252       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0415 11:19:02.499544       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0415 11:19:05.430034       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.74.127"}
	I0415 11:19:06.025411       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0415 11:19:09.604062       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.54.235"}
	I0415 11:19:25.760491       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 11:19:25.760560       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0415 11:19:25.793204       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 11:19:25.793486       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0415 11:19:25.817522       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 11:19:25.817714       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0415 11:19:25.819316       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 11:19:25.819415       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0415 11:19:25.849087       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 11:19:25.849423       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0415 11:19:26.826472       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0415 11:19:26.850084       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0415 11:19:26.859002       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
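The apiserver, controller-manager, scheduler, and kube-proxy sections all use the klog header format (severity letter, MMDD date, time, PID, `file:line]`, message). A minimal sketch of splitting that prefix, assuming lines shaped like the ones above (`parse_klog` is an illustrative helper, not a real Kubernetes utility):

```python
import re

# klog header: <I|W|E|F>mmdd hh:mm:ss.uuuuuu <pid> <file:line>] <message>
KLOG_RE = re.compile(
    r"^(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<pid>\d+) (?P<loc>[^\]]+)\] (?P<msg>.*)$"
)

def parse_klog(line):
    """Return a dict of klog header fields, or None if the line doesn't match."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

entry = parse_klog(
    "W0415 11:19:26.826472       1 cacher.go:168] "
    "Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io"
)
print(entry["sev"], entry["loc"], entry["msg"])
```

Grouping parsed entries by severity and `loc` quickly shows, for example, that the repeated `reflector.go:147` errors here all trace back to the snapshot CRDs being torn down at 11:19:25-26.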
	
	
	==> kube-controller-manager [f39cc7bdef68cb7b6b9f1022f37dd71c9db422929c6c3dc5f7b65d8f46c3f4fd] <==
	I0415 11:19:09.393553       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-p2kpw"
	I0415 11:19:09.424539       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I0415 11:19:09.431235       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="98.995016ms"
	I0415 11:19:09.507261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="75.960261ms"
	I0415 11:19:09.507348       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="34.122µs"
	I0415 11:19:09.759800       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0415 11:19:09.759874       1 shared_informer.go:318] Caches are synced for garbage collector
	W0415 11:19:10.280180       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 11:19:10.280219       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0415 11:19:12.275575       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0415 11:19:12.290230       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="5.613µs"
	I0415 11:19:12.290667       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0415 11:19:12.425595       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="130.846µs"
	I0415 11:19:12.460193       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="16.844835ms"
	I0415 11:19:12.460721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="476.54µs"
	I0415 11:19:14.019885       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0415 11:19:15.673094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="19.01621ms"
	I0415 11:19:15.673997       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="103.78µs"
	I0415 11:19:22.355181       1 namespace_controller.go:182] "Namespace has been deleted" namespace="ingress-nginx"
	W0415 11:19:22.897251       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 11:19:22.897287       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0415 11:19:25.895391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="6.563µs"
	E0415 11:19:26.829876       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 11:19:26.851908       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 11:19:26.860536       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [9d298ffd6ee8a13efb60cef702c02b220e5ed04c83c57cf60ab4a948fdc62715] <==
	I0415 11:17:10.969907       1 server_others.go:72] "Using iptables proxy"
	I0415 11:17:10.986417       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.62"]
	I0415 11:17:11.086396       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 11:17:11.086435       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 11:17:11.086449       1 server_others.go:168] "Using iptables Proxier"
	I0415 11:17:11.140362       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 11:17:11.140652       1 server.go:865] "Version info" version="v1.29.3"
	I0415 11:17:11.140685       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 11:17:11.141696       1 config.go:188] "Starting service config controller"
	I0415 11:17:11.141736       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 11:17:11.141757       1 config.go:97] "Starting endpoint slice config controller"
	I0415 11:17:11.141761       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 11:17:11.152198       1 config.go:315] "Starting node config controller"
	I0415 11:17:11.152339       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 11:17:11.343504       1 shared_informer.go:318] Caches are synced for service config
	I0415 11:17:11.343505       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 11:17:11.353160       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [1a3ceb1d5cb96e462f690d4cbac2b1646228fb6b77b81f086b7b44ded568a769] <==
	W0415 11:16:52.830491       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 11:16:52.830564       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0415 11:16:53.762417       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 11:16:53.762699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 11:16:53.805902       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 11:16:53.806389       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 11:16:53.841007       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 11:16:53.841418       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 11:16:53.862297       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 11:16:53.862647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 11:16:53.898290       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 11:16:53.898318       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0415 11:16:54.011584       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 11:16:54.011639       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 11:16:54.024688       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0415 11:16:54.024741       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0415 11:16:54.033940       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0415 11:16:54.034378       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0415 11:16:54.051702       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 11:16:54.052512       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 11:16:54.114175       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 11:16:54.114524       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 11:16:54.160952       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 11:16:54.161238       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0415 11:16:56.005422       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 11:19:25 addons-316289 kubelet[1244]: I0415 11:19:25.350963    1244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/3be294d1-7baf-42a9-984d-a773ddcee738-device-plugin\") pod \"3be294d1-7baf-42a9-984d-a773ddcee738\" (UID: \"3be294d1-7baf-42a9-984d-a773ddcee738\") "
	Apr 15 11:19:25 addons-316289 kubelet[1244]: I0415 11:19:25.351045    1244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3be294d1-7baf-42a9-984d-a773ddcee738-device-plugin" (OuterVolumeSpecName: "device-plugin") pod "3be294d1-7baf-42a9-984d-a773ddcee738" (UID: "3be294d1-7baf-42a9-984d-a773ddcee738"). InnerVolumeSpecName "device-plugin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Apr 15 11:19:25 addons-316289 kubelet[1244]: I0415 11:19:25.354201    1244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3be294d1-7baf-42a9-984d-a773ddcee738-kube-api-access-bbf2m" (OuterVolumeSpecName: "kube-api-access-bbf2m") pod "3be294d1-7baf-42a9-984d-a773ddcee738" (UID: "3be294d1-7baf-42a9-984d-a773ddcee738"). InnerVolumeSpecName "kube-api-access-bbf2m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 15 11:19:25 addons-316289 kubelet[1244]: I0415 11:19:25.452183    1244 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bbf2m\" (UniqueName: \"kubernetes.io/projected/3be294d1-7baf-42a9-984d-a773ddcee738-kube-api-access-bbf2m\") on node \"addons-316289\" DevicePath \"\""
	Apr 15 11:19:25 addons-316289 kubelet[1244]: I0415 11:19:25.452216    1244 reconciler_common.go:300] "Volume detached for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/3be294d1-7baf-42a9-984d-a773ddcee738-device-plugin\") on node \"addons-316289\" DevicePath \"\""
	Apr 15 11:19:25 addons-316289 kubelet[1244]: I0415 11:19:25.496915    1244 scope.go:117] "RemoveContainer" containerID="b05e5a105691ac33bdf0dc66e1b2d03989fb1c3e8ebec73a39dedc0c29bf8503"
	Apr 15 11:19:25 addons-316289 kubelet[1244]: I0415 11:19:25.517657    1244 scope.go:117] "RemoveContainer" containerID="b05e5a105691ac33bdf0dc66e1b2d03989fb1c3e8ebec73a39dedc0c29bf8503"
	Apr 15 11:19:25 addons-316289 kubelet[1244]: E0415 11:19:25.518832    1244 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b05e5a105691ac33bdf0dc66e1b2d03989fb1c3e8ebec73a39dedc0c29bf8503\": not found" containerID="b05e5a105691ac33bdf0dc66e1b2d03989fb1c3e8ebec73a39dedc0c29bf8503"
	Apr 15 11:19:25 addons-316289 kubelet[1244]: I0415 11:19:25.519047    1244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b05e5a105691ac33bdf0dc66e1b2d03989fb1c3e8ebec73a39dedc0c29bf8503"} err="failed to get container status \"b05e5a105691ac33bdf0dc66e1b2d03989fb1c3e8ebec73a39dedc0c29bf8503\": rpc error: code = NotFound desc = an error occurred when try to find container \"b05e5a105691ac33bdf0dc66e1b2d03989fb1c3e8ebec73a39dedc0c29bf8503\": not found"
	Apr 15 11:19:26 addons-316289 kubelet[1244]: I0415 11:19:26.094424    1244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3be294d1-7baf-42a9-984d-a773ddcee738" path="/var/lib/kubelet/pods/3be294d1-7baf-42a9-984d-a773ddcee738/volumes"
	Apr 15 11:19:26 addons-316289 kubelet[1244]: I0415 11:19:26.094873    1244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62d9add5-c60a-4339-bd52-28f421976fdf" path="/var/lib/kubelet/pods/62d9add5-c60a-4339-bd52-28f421976fdf/volumes"
	Apr 15 11:19:26 addons-316289 kubelet[1244]: I0415 11:19:26.358680    1244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p8dn\" (UniqueName: \"kubernetes.io/projected/4d6b0dd8-3404-4598-9357-5f0cc39686c8-kube-api-access-7p8dn\") pod \"4d6b0dd8-3404-4598-9357-5f0cc39686c8\" (UID: \"4d6b0dd8-3404-4598-9357-5f0cc39686c8\") "
	Apr 15 11:19:26 addons-316289 kubelet[1244]: I0415 11:19:26.358725    1244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgvzv\" (UniqueName: \"kubernetes.io/projected/ff95ee05-6125-4749-acb6-b02bb80713ab-kube-api-access-mgvzv\") pod \"ff95ee05-6125-4749-acb6-b02bb80713ab\" (UID: \"ff95ee05-6125-4749-acb6-b02bb80713ab\") "
	Apr 15 11:19:26 addons-316289 kubelet[1244]: I0415 11:19:26.360999    1244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff95ee05-6125-4749-acb6-b02bb80713ab-kube-api-access-mgvzv" (OuterVolumeSpecName: "kube-api-access-mgvzv") pod "ff95ee05-6125-4749-acb6-b02bb80713ab" (UID: "ff95ee05-6125-4749-acb6-b02bb80713ab"). InnerVolumeSpecName "kube-api-access-mgvzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 15 11:19:26 addons-316289 kubelet[1244]: I0415 11:19:26.363178    1244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d6b0dd8-3404-4598-9357-5f0cc39686c8-kube-api-access-7p8dn" (OuterVolumeSpecName: "kube-api-access-7p8dn") pod "4d6b0dd8-3404-4598-9357-5f0cc39686c8" (UID: "4d6b0dd8-3404-4598-9357-5f0cc39686c8"). InnerVolumeSpecName "kube-api-access-7p8dn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 15 11:19:26 addons-316289 kubelet[1244]: I0415 11:19:26.459884    1244 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7p8dn\" (UniqueName: \"kubernetes.io/projected/4d6b0dd8-3404-4598-9357-5f0cc39686c8-kube-api-access-7p8dn\") on node \"addons-316289\" DevicePath \"\""
	Apr 15 11:19:26 addons-316289 kubelet[1244]: I0415 11:19:26.459948    1244 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mgvzv\" (UniqueName: \"kubernetes.io/projected/ff95ee05-6125-4749-acb6-b02bb80713ab-kube-api-access-mgvzv\") on node \"addons-316289\" DevicePath \"\""
	Apr 15 11:19:26 addons-316289 kubelet[1244]: I0415 11:19:26.504907    1244 scope.go:117] "RemoveContainer" containerID="14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4"
	Apr 15 11:19:26 addons-316289 kubelet[1244]: I0415 11:19:26.524791    1244 scope.go:117] "RemoveContainer" containerID="14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4"
	Apr 15 11:19:26 addons-316289 kubelet[1244]: E0415 11:19:26.525442    1244 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4\": not found" containerID="14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4"
	Apr 15 11:19:26 addons-316289 kubelet[1244]: I0415 11:19:26.525539    1244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4"} err="failed to get container status \"14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4\": rpc error: code = NotFound desc = an error occurred when try to find container \"14f43eb6b0858134fbaa6dd7d8261fdd40511c79da8f988e2d2760cb9cc27ed4\": not found"
	Apr 15 11:19:26 addons-316289 kubelet[1244]: I0415 11:19:26.525581    1244 scope.go:117] "RemoveContainer" containerID="d37a7ef2f317c6c29b7fe7ca638f952abd6771d6cd824fbf63711f740f28f5db"
	Apr 15 11:19:26 addons-316289 kubelet[1244]: I0415 11:19:26.536823    1244 scope.go:117] "RemoveContainer" containerID="d37a7ef2f317c6c29b7fe7ca638f952abd6771d6cd824fbf63711f740f28f5db"
	Apr 15 11:19:26 addons-316289 kubelet[1244]: E0415 11:19:26.554679    1244 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d37a7ef2f317c6c29b7fe7ca638f952abd6771d6cd824fbf63711f740f28f5db\": not found" containerID="d37a7ef2f317c6c29b7fe7ca638f952abd6771d6cd824fbf63711f740f28f5db"
	Apr 15 11:19:26 addons-316289 kubelet[1244]: I0415 11:19:26.554713    1244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d37a7ef2f317c6c29b7fe7ca638f952abd6771d6cd824fbf63711f740f28f5db"} err="failed to get container status \"d37a7ef2f317c6c29b7fe7ca638f952abd6771d6cd824fbf63711f740f28f5db\": rpc error: code = NotFound desc = an error occurred when try to find container \"d37a7ef2f317c6c29b7fe7ca638f952abd6771d6cd824fbf63711f740f28f5db\": not found"
	
	
	==> storage-provisioner [cdb092eabe4799c691409c0c6dffd87d67193433c3d4a45ac24f73c9b290e71e] <==
	I0415 11:17:17.529303       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0415 11:17:17.553401       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0415 11:17:17.553479       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0415 11:17:17.576602       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0415 11:17:17.577753       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-316289_d8524260-94e7-4405-8f84-e014920e3781!
	I0415 11:17:17.581989       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2d3adc6b-4a2a-4d2b-a502-8e0327eb163a", APIVersion:"v1", ResourceVersion:"602", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-316289_d8524260-94e7-4405-8f84-e014920e3781 became leader
	I0415 11:17:17.678978       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-316289_d8524260-94e7-4405-8f84-e014920e3781!
	E0415 11:19:08.553683       1 controller.go:1050] claim "9d91cbf7-9ab7-4ae2-ab53-d73730d96bbe" in work queue no longer exists
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-316289 -n addons-316289
helpers_test.go:261: (dbg) Run:  kubectl --context addons-316289 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (54.63s)


Test pass (293/333)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 44.98
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.29.3/json-events 11.7
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.15
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.30.0-rc.2/json-events 42.15
22 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 0.15
28 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.59
31 TestOffline 100.6
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 140.48
38 TestAddons/parallel/Registry 19.8
39 TestAddons/parallel/Ingress 21.33
40 TestAddons/parallel/InspektorGadget 11.18
41 TestAddons/parallel/MetricsServer 6
42 TestAddons/parallel/HelmTiller 11.74
45 TestAddons/parallel/Headlamp 12.95
46 TestAddons/parallel/CloudSpanner 6.76
47 TestAddons/parallel/LocalPath 56.47
48 TestAddons/parallel/NvidiaDevicePlugin 5.62
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
53 TestAddons/StoppedEnableDisable 92.8
54 TestCertOptions 57.14
55 TestCertExpiration 309.66
57 TestForceSystemdFlag 47.55
58 TestForceSystemdEnv 69.88
60 TestKVMDriverInstallOrUpdate 4.29
64 TestErrorSpam/setup 46.79
65 TestErrorSpam/start 0.41
66 TestErrorSpam/status 0.79
67 TestErrorSpam/pause 1.69
68 TestErrorSpam/unpause 1.7
69 TestErrorSpam/stop 4.9
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 59.98
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 45.29
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 4.04
81 TestFunctional/serial/CacheCmd/cache/add_local 2.36
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.89
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 38.51
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.41
92 TestFunctional/serial/LogsFileCmd 1.37
93 TestFunctional/serial/InvalidService 3.64
95 TestFunctional/parallel/ConfigCmd 0.43
96 TestFunctional/parallel/DashboardCmd 31.54
97 TestFunctional/parallel/DryRun 0.31
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 0.83
103 TestFunctional/parallel/ServiceCmdConnect 24.62
104 TestFunctional/parallel/AddonsCmd 0.16
105 TestFunctional/parallel/PersistentVolumeClaim 56.15
107 TestFunctional/parallel/SSHCmd 0.45
108 TestFunctional/parallel/CpCmd 1.45
109 TestFunctional/parallel/MySQL 28.08
110 TestFunctional/parallel/FileSync 0.24
111 TestFunctional/parallel/CertSync 1.45
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
119 TestFunctional/parallel/License 0.64
129 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
131 TestFunctional/parallel/ProfileCmd/profile_list 0.31
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
133 TestFunctional/parallel/MountCmd/any-port 7.34
134 TestFunctional/parallel/MountCmd/specific-port 1.87
135 TestFunctional/parallel/ServiceCmd/List 0.31
136 TestFunctional/parallel/MountCmd/VerifyCleanup 1.56
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.27
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
139 TestFunctional/parallel/ServiceCmd/Format 0.32
140 TestFunctional/parallel/ServiceCmd/URL 0.3
141 TestFunctional/parallel/Version/short 0.07
142 TestFunctional/parallel/Version/components 0.79
143 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
144 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
145 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
146 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
147 TestFunctional/parallel/ImageCommands/ImageBuild 4.14
148 TestFunctional/parallel/ImageCommands/Setup 2.06
149 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.41
150 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.28
151 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.88
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.06
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.53
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.09
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.02
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMultiControlPlane/serial/StartCluster 213.69
166 TestMultiControlPlane/serial/DeployApp 6.81
167 TestMultiControlPlane/serial/PingHostFromPods 1.39
168 TestMultiControlPlane/serial/AddWorkerNode 46.38
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.57
171 TestMultiControlPlane/serial/CopyFile 14.11
172 TestMultiControlPlane/serial/StopSecondaryNode 92.5
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.43
174 TestMultiControlPlane/serial/RestartSecondaryNode 44.84
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.57
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 446.42
177 TestMultiControlPlane/serial/DeleteSecondaryNode 7.83
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.42
179 TestMultiControlPlane/serial/StopCluster 275.72
180 TestMultiControlPlane/serial/RestartCluster 155.28
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
182 TestMultiControlPlane/serial/AddSecondaryNode 71.99
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.57
187 TestJSONOutput/start/Command 98.54
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.74
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.65
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 6.71
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.23
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 89.96
219 TestMountStart/serial/StartWithMountFirst 29.92
220 TestMountStart/serial/VerifyMountFirst 0.41
221 TestMountStart/serial/StartWithMountSecond 29.28
222 TestMountStart/serial/VerifyMountSecond 0.4
223 TestMountStart/serial/DeleteFirst 0.73
224 TestMountStart/serial/VerifyMountPostDelete 0.41
225 TestMountStart/serial/Stop 1.39
226 TestMountStart/serial/RestartStopped 23.07
227 TestMountStart/serial/VerifyMountPostStop 0.42
230 TestMultiNode/serial/FreshStart2Nodes 133.49
231 TestMultiNode/serial/DeployApp2Nodes 5.28
232 TestMultiNode/serial/PingHostFrom2Pods 0.87
233 TestMultiNode/serial/AddNode 39.91
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.24
236 TestMultiNode/serial/CopyFile 7.9
237 TestMultiNode/serial/StopNode 2.39
238 TestMultiNode/serial/StartAfterStop 26.27
239 TestMultiNode/serial/RestartKeepsNodes 291.79
240 TestMultiNode/serial/DeleteNode 2.42
241 TestMultiNode/serial/StopMultiNode 183.47
242 TestMultiNode/serial/RestartMultiNode 78.29
243 TestMultiNode/serial/ValidateNameConflict 48.51
248 TestPreload 312.64
250 TestScheduledStopUnix 117.85
254 TestRunningBinaryUpgrade 226.51
256 TestKubernetesUpgrade 179.46
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 95.39
268 TestNetworkPlugins/group/false 5
273 TestPause/serial/Start 76.05
274 TestNoKubernetes/serial/StartWithStopK8s 51.98
282 TestNoKubernetes/serial/Start 35.22
283 TestPause/serial/SecondStartNoReconfiguration 57.31
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
285 TestNoKubernetes/serial/ProfileList 1.62
286 TestNoKubernetes/serial/Stop 1.45
287 TestNoKubernetes/serial/StartNoArgs 23.82
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
289 TestPause/serial/Pause 0.84
290 TestPause/serial/VerifyStatus 0.32
291 TestPause/serial/Unpause 1
292 TestPause/serial/PauseAgain 0.9
293 TestPause/serial/DeletePaused 1.87
294 TestPause/serial/VerifyDeletedResources 0.28
295 TestStoppedBinaryUpgrade/Setup 2.29
296 TestStoppedBinaryUpgrade/Upgrade 172.14
297 TestNetworkPlugins/group/auto/Start 127.53
298 TestNetworkPlugins/group/kindnet/Start 63.96
299 TestNetworkPlugins/group/calico/Start 117.67
300 TestNetworkPlugins/group/auto/KubeletFlags 0.24
301 TestNetworkPlugins/group/auto/NetCatPod 10.28
302 TestNetworkPlugins/group/auto/DNS 0.17
303 TestNetworkPlugins/group/auto/Localhost 0.13
304 TestNetworkPlugins/group/auto/HairPin 0.14
305 TestStoppedBinaryUpgrade/MinikubeLogs 1.19
306 TestNetworkPlugins/group/custom-flannel/Start 110.94
307 TestNetworkPlugins/group/enable-default-cni/Start 147.76
308 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
309 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
310 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
311 TestNetworkPlugins/group/kindnet/DNS 0.22
312 TestNetworkPlugins/group/kindnet/Localhost 0.15
313 TestNetworkPlugins/group/kindnet/HairPin 0.17
314 TestNetworkPlugins/group/flannel/Start 98.12
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.24
317 TestNetworkPlugins/group/calico/NetCatPod 11.23
318 TestNetworkPlugins/group/calico/DNS 0.25
319 TestNetworkPlugins/group/calico/Localhost 0.19
320 TestNetworkPlugins/group/calico/HairPin 0.17
321 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
322 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.32
323 TestNetworkPlugins/group/custom-flannel/DNS 0.19
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
326 TestNetworkPlugins/group/bridge/Start 101.65
328 TestStartStop/group/old-k8s-version/serial/FirstStart 200.07
329 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
330 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.26
331 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
332 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
333 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
334 TestNetworkPlugins/group/flannel/ControllerPod 6.01
335 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
336 TestNetworkPlugins/group/flannel/NetCatPod 10.3
338 TestStartStop/group/no-preload/serial/FirstStart 132
339 TestNetworkPlugins/group/flannel/DNS 0.2
340 TestNetworkPlugins/group/flannel/Localhost 0.17
341 TestNetworkPlugins/group/flannel/HairPin 0.18
343 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 71.97
344 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
345 TestNetworkPlugins/group/bridge/NetCatPod 11.27
346 TestNetworkPlugins/group/bridge/DNS 0.2
347 TestNetworkPlugins/group/bridge/Localhost 0.18
348 TestNetworkPlugins/group/bridge/HairPin 0.15
350 TestStartStop/group/newest-cni/serial/FirstStart 58.55
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.27
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.55
354 TestStartStop/group/no-preload/serial/DeployApp 9.31
355 TestStartStop/group/newest-cni/serial/DeployApp 0
356 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
357 TestStartStop/group/newest-cni/serial/Stop 7.37
358 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
359 TestStartStop/group/no-preload/serial/Stop 92.54
360 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
361 TestStartStop/group/newest-cni/serial/SecondStart 36.38
362 TestStartStop/group/old-k8s-version/serial/DeployApp 9.44
363 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.09
364 TestStartStop/group/old-k8s-version/serial/Stop 92.55
365 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
366 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
367 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
368 TestStartStop/group/newest-cni/serial/Pause 2.51
370 TestStartStop/group/embed-certs/serial/FirstStart 100.43
371 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
372 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 322.93
373 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
374 TestStartStop/group/no-preload/serial/SecondStart 322.77
375 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
376 TestStartStop/group/old-k8s-version/serial/SecondStart 199.31
377 TestStartStop/group/embed-certs/serial/DeployApp 10.33
378 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.29
379 TestStartStop/group/embed-certs/serial/Stop 92.52
380 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.3
381 TestStartStop/group/embed-certs/serial/SecondStart 316.53
382 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
384 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
385 TestStartStop/group/old-k8s-version/serial/Pause 2.87
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 17.01
387 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
388 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
389 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.84
390 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
391 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.08
392 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
393 TestStartStop/group/no-preload/serial/Pause 2.72
394 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
396 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
397 TestStartStop/group/embed-certs/serial/Pause 2.61
TestDownloadOnly/v1.20.0/json-events (44.98s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-974926 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-974926 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (44.983449413s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (44.98s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-974926
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-974926: exit status 85 (78.500912ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-974926 | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:14 UTC |          |
	|         | -p download-only-974926        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 11:14:32
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 11:14:32.109993  361841 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:14:32.110290  361841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:14:32.110303  361841 out.go:304] Setting ErrFile to fd 2...
	I0415 11:14:32.110307  361841 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:14:32.110570  361841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
	W0415 11:14:32.110733  361841 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18644-354432/.minikube/config/config.json: open /home/jenkins/minikube-integration/18644-354432/.minikube/config/config.json: no such file or directory
	I0415 11:14:32.111431  361841 out.go:298] Setting JSON to true
	I0415 11:14:32.112478  361841 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3415,"bootTime":1713176257,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 11:14:32.112560  361841 start.go:139] virtualization: kvm guest
	I0415 11:14:32.115441  361841 out.go:97] [download-only-974926] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	W0415 11:14:32.115533  361841 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18644-354432/.minikube/cache/preloaded-tarball: no such file or directory
	I0415 11:14:32.117409  361841 out.go:169] MINIKUBE_LOCATION=18644
	I0415 11:14:32.115706  361841 notify.go:220] Checking for updates...
	I0415 11:14:32.120540  361841 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 11:14:32.122178  361841 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18644-354432/kubeconfig
	I0415 11:14:32.123839  361841 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18644-354432/.minikube
	I0415 11:14:32.125367  361841 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0415 11:14:32.127939  361841 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 11:14:32.128203  361841 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 11:14:32.162904  361841 out.go:97] Using the kvm2 driver based on user configuration
	I0415 11:14:32.162933  361841 start.go:297] selected driver: kvm2
	I0415 11:14:32.162944  361841 start.go:901] validating driver "kvm2" against <nil>
	I0415 11:14:32.163323  361841 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 11:14:32.163437  361841 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18644-354432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0415 11:14:32.179157  361841 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0415 11:14:32.179234  361841 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 11:14:32.179768  361841 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0415 11:14:32.179967  361841 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 11:14:32.180057  361841 cni.go:84] Creating CNI manager for ""
	I0415 11:14:32.180077  361841 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0415 11:14:32.180087  361841 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 11:14:32.180164  361841 start.go:340] cluster config:
	{Name:download-only-974926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-974926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 11:14:32.180369  361841 iso.go:125] acquiring lock: {Name:mk9a0fa1d69df45a672e90a0ca39f76901edf3d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 11:14:32.182248  361841 out.go:97] Downloading VM boot image ...
	I0415 11:14:32.182281  361841 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18644-354432/.minikube/cache/iso/amd64/minikube-v1.33.0-1712854267-18621-amd64.iso
	I0415 11:14:40.862913  361841 out.go:97] Starting "download-only-974926" primary control-plane node in "download-only-974926" cluster
	I0415 11:14:40.862950  361841 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0415 11:14:40.962460  361841 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0415 11:14:40.962507  361841 cache.go:56] Caching tarball of preloaded images
	I0415 11:14:40.962673  361841 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0415 11:14:40.964926  361841 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0415 11:14:40.964959  361841 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0415 11:14:41.064983  361841 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/18644-354432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0415 11:14:53.016833  361841 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0415 11:14:53.016925  361841 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18644-354432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0415 11:14:53.918466  361841 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0415 11:14:53.918836  361841 profile.go:143] Saving config to /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/download-only-974926/config.json ...
	I0415 11:14:53.918879  361841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/download-only-974926/config.json: {Name:mk30e3bd8c40e5a5a288ff3275d6e3692bef7f7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:14:53.919071  361841 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0415 11:14:53.919287  361841 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18644-354432/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-974926 host does not exist
	  To start a cluster, run: "minikube start -p download-only-974926"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-974926
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.29.3/json-events (11.7s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-052198 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-052198 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (11.698606625s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (11.70s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-052198
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-052198: exit status 85 (79.52811ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-974926 | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:14 UTC |                     |
	|         | -p download-only-974926        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:15 UTC | 15 Apr 24 11:15 UTC |
	| delete  | -p download-only-974926        | download-only-974926 | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:15 UTC | 15 Apr 24 11:15 UTC |
	| start   | -o=json --download-only        | download-only-052198 | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:15 UTC |                     |
	|         | -p download-only-052198        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 11:15:17
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 11:15:17.468166  362138 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:15:17.468440  362138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:15:17.468450  362138 out.go:304] Setting ErrFile to fd 2...
	I0415 11:15:17.468455  362138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:15:17.468681  362138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
	I0415 11:15:17.469335  362138 out.go:298] Setting JSON to true
	I0415 11:15:17.470358  362138 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3461,"bootTime":1713176257,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 11:15:17.470429  362138 start.go:139] virtualization: kvm guest
	I0415 11:15:17.472761  362138 out.go:97] [download-only-052198] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 11:15:17.474684  362138 out.go:169] MINIKUBE_LOCATION=18644
	I0415 11:15:17.473030  362138 notify.go:220] Checking for updates...
	I0415 11:15:17.477581  362138 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 11:15:17.479066  362138 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18644-354432/kubeconfig
	I0415 11:15:17.480425  362138 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18644-354432/.minikube
	I0415 11:15:17.482036  362138 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0415 11:15:17.485002  362138 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 11:15:17.485301  362138 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 11:15:17.519227  362138 out.go:97] Using the kvm2 driver based on user configuration
	I0415 11:15:17.519266  362138 start.go:297] selected driver: kvm2
	I0415 11:15:17.519272  362138 start.go:901] validating driver "kvm2" against <nil>
	I0415 11:15:17.519658  362138 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 11:15:17.519768  362138 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18644-354432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0415 11:15:17.535987  362138 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0415 11:15:17.536051  362138 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 11:15:17.536793  362138 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0415 11:15:17.537036  362138 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 11:15:17.537130  362138 cni.go:84] Creating CNI manager for ""
	I0415 11:15:17.537146  362138 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0415 11:15:17.537158  362138 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 11:15:17.537240  362138 start.go:340] cluster config:
	{Name:download-only-052198 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-052198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 11:15:17.537419  362138 iso.go:125] acquiring lock: {Name:mk9a0fa1d69df45a672e90a0ca39f76901edf3d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 11:15:17.539338  362138 out.go:97] Starting "download-only-052198" primary control-plane node in "download-only-052198" cluster
	I0415 11:15:17.539365  362138 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0415 11:15:17.638964  362138 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	I0415 11:15:17.638999  362138 cache.go:56] Caching tarball of preloaded images
	I0415 11:15:17.639199  362138 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0415 11:15:17.641196  362138 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0415 11:15:17.641217  362138 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 ...
	I0415 11:15:17.757421  362138 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:dcad3363f354722395d68e96a1f5de54 -> /home/jenkins/minikube-integration/18644-354432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	I0415 11:15:27.379686  362138 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 ...
	I0415 11:15:27.379834  362138 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18644-354432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-052198 host does not exist
	  To start a cluster, run: "minikube start -p download-only-052198"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

TestDownloadOnly/v1.29.3/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.15s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-052198
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.30.0-rc.2/json-events (42.15s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-068316 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-068316 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (42.144613034s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (42.15s)

TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-068316
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-068316: exit status 85 (80.043267ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-974926 | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:14 UTC |                     |
	|         | -p download-only-974926           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:15 UTC | 15 Apr 24 11:15 UTC |
	| delete  | -p download-only-974926           | download-only-974926 | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:15 UTC | 15 Apr 24 11:15 UTC |
	| start   | -o=json --download-only           | download-only-052198 | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:15 UTC |                     |
	|         | -p download-only-052198           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:15 UTC | 15 Apr 24 11:15 UTC |
	| delete  | -p download-only-052198           | download-only-052198 | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:15 UTC | 15 Apr 24 11:15 UTC |
	| start   | -o=json --download-only           | download-only-068316 | jenkins | v1.33.0-beta.0 | 15 Apr 24 11:15 UTC |                     |
	|         | -p download-only-068316           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 11:15:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 11:15:29.535556  362332 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:15:29.535705  362332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:15:29.535717  362332 out.go:304] Setting ErrFile to fd 2...
	I0415 11:15:29.535724  362332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:15:29.535926  362332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
	I0415 11:15:29.536538  362332 out.go:298] Setting JSON to true
	I0415 11:15:29.537527  362332 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3473,"bootTime":1713176257,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 11:15:29.537607  362332 start.go:139] virtualization: kvm guest
	I0415 11:15:29.539992  362332 out.go:97] [download-only-068316] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 11:15:29.540137  362332 notify.go:220] Checking for updates...
	I0415 11:15:29.541962  362332 out.go:169] MINIKUBE_LOCATION=18644
	I0415 11:15:29.543477  362332 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 11:15:29.545088  362332 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18644-354432/kubeconfig
	I0415 11:15:29.546399  362332 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18644-354432/.minikube
	I0415 11:15:29.547948  362332 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0415 11:15:29.550744  362332 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 11:15:29.551024  362332 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 11:15:29.583894  362332 out.go:97] Using the kvm2 driver based on user configuration
	I0415 11:15:29.583926  362332 start.go:297] selected driver: kvm2
	I0415 11:15:29.583931  362332 start.go:901] validating driver "kvm2" against <nil>
	I0415 11:15:29.584271  362332 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 11:15:29.584373  362332 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18644-354432/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0415 11:15:29.599613  362332 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0415 11:15:29.599700  362332 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 11:15:29.600251  362332 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0415 11:15:29.600429  362332 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 11:15:29.600528  362332 cni.go:84] Creating CNI manager for ""
	I0415 11:15:29.600545  362332 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0415 11:15:29.600557  362332 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 11:15:29.600635  362332 start.go:340] cluster config:
	{Name:download-only-068316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:download-only-068316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 11:15:29.600767  362332 iso.go:125] acquiring lock: {Name:mk9a0fa1d69df45a672e90a0ca39f76901edf3d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 11:15:29.602736  362332 out.go:97] Starting "download-only-068316" primary control-plane node in "download-only-068316" cluster
	I0415 11:15:29.602768  362332 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime containerd
	I0415 11:15:29.697685  362332 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0415 11:15:29.697724  362332 cache.go:56] Caching tarball of preloaded images
	I0415 11:15:29.697933  362332 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime containerd
	I0415 11:15:29.700017  362332 out.go:97] Downloading Kubernetes v1.30.0-rc.2 preload ...
	I0415 11:15:29.700045  362332 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0415 11:15:29.798177  362332 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:dfcc3b0407e077e710ff902e47acd662 -> /home/jenkins/minikube-integration/18644-354432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0415 11:15:39.803713  362332 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0415 11:15:39.804513  362332 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18644-354432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0415 11:15:40.556077  362332 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on containerd
	I0415 11:15:40.556450  362332 profile.go:143] Saving config to /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/download-only-068316/config.json ...
	I0415 11:15:40.556482  362332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/download-only-068316/config.json: {Name:mk3dafaef058a365dad47c2bc97ea1d55ddff32a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 11:15:40.556650  362332 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime containerd
	I0415 11:15:40.556822  362332 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18644-354432/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubectl
	
	
	* The control-plane node download-only-068316 host does not exist
	  To start a cluster, run: "minikube start -p download-only-068316"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.15s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-068316
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-720489 --alsologtostderr --binary-mirror http://127.0.0.1:42033 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-720489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-720489
--- PASS: TestBinaryMirror (0.59s)

TestOffline (100.6s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-248594 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-248594 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m38.858457554s)
helpers_test.go:175: Cleaning up "offline-containerd-248594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-248594
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-248594: (1.744141928s)
--- PASS: TestOffline (100.60s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-316289
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-316289: exit status 85 (70.6013ms)
-- stdout --
	* Profile "addons-316289" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-316289"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-316289
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-316289: exit status 85 (69.198812ms)
-- stdout --
	* Profile "addons-316289" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-316289"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (140.48s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-316289 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-316289 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m20.484577787s)
--- PASS: TestAddons/Setup (140.48s)

TestAddons/parallel/Registry (19.8s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 22.598243ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-m6lkv" [e6935040-f766-47e5-bd50-a5be1079e707] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006123722s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vjpvs" [5f2cf2a0-2836-4ea8-8409-f3af4a3baac7] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00599056s
addons_test.go:340: (dbg) Run:  kubectl --context addons-316289 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-316289 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-316289 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.922085286s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-316289 ip
2024/04/15 11:18:52 [DEBUG] GET http://192.168.39.62:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-316289 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.80s)

TestAddons/parallel/Ingress (21.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-316289 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-316289 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-316289 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [21361bdc-37ee-49ed-af8b-1cf8d52aea60] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [21361bdc-37ee-49ed-af8b-1cf8d52aea60] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.050255664s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-316289 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-316289 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-316289 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.62
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-316289 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-316289 addons disable ingress-dns --alsologtostderr -v=1: (1.480764493s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-316289 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-316289 addons disable ingress --alsologtostderr -v=1: (7.975864842s)
--- PASS: TestAddons/parallel/Ingress (21.33s)

TestAddons/parallel/InspektorGadget (11.18s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lzh9m" [924cbc6f-6686-456d-b6be-eb5110035189] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013582793s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-316289
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-316289: (6.166202196s)
--- PASS: TestAddons/parallel/InspektorGadget (11.18s)

TestAddons/parallel/MetricsServer (6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.467435ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-gkzzk" [90826a2e-0cae-4a73-9caf-e74d8d966b44] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006112396s
addons_test.go:415: (dbg) Run:  kubectl --context addons-316289 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-316289 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.00s)

TestAddons/parallel/HelmTiller (11.74s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.02926ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-qltlq" [54865dad-a845-41ed-97ae-b6f5ef0ba018] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.006327846s
addons_test.go:473: (dbg) Run:  kubectl --context addons-316289 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-316289 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.026425501s)
addons_test.go:478: kubectl --context addons-316289 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-316289 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.74s)

TestAddons/parallel/Headlamp (12.95s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-316289 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-tpqsd" [92b0a84b-3251-4617-a2b0-05195fde8a5e] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-tpqsd" [92b0a84b-3251-4617-a2b0-05195fde8a5e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-tpqsd" [92b0a84b-3251-4617-a2b0-05195fde8a5e] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005193169s
--- PASS: TestAddons/parallel/Headlamp (12.95s)

TestAddons/parallel/CloudSpanner (6.76s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-thzgj" [f31ff5e8-2978-4abc-9b71-232867e59868] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004964803s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-316289
--- PASS: TestAddons/parallel/CloudSpanner (6.76s)

TestAddons/parallel/LocalPath (56.47s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-316289 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-316289 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-316289 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [91bf21c1-fd38-49c0-a1df-35852d99c009] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [91bf21c1-fd38-49c0-a1df-35852d99c009] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [91bf21c1-fd38-49c0-a1df-35852d99c009] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004492185s
addons_test.go:891: (dbg) Run:  kubectl --context addons-316289 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-316289 ssh "cat /opt/local-path-provisioner/pvc-eeb96ba2-dac8-4abf-bab1-600492ef2421_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-316289 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-316289 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-316289 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-316289 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.546260666s)
--- PASS: TestAddons/parallel/LocalPath (56.47s)

TestAddons/parallel/NvidiaDevicePlugin (5.62s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-shr4w" [3be294d1-7baf-42a9-984d-a773ddcee738] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005884906s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-316289
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.62s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-89vjq" [85f0b059-a31a-4962-af68-4c2a291959c4] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004691124s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-316289 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-316289 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (92.8s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-316289
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-316289: (1m32.462019283s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-316289
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-316289
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-316289
--- PASS: TestAddons/StoppedEnableDisable (92.80s)

TestCertOptions (57.14s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-782842 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-782842 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (55.585024589s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-782842 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-782842 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-782842 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-782842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-782842
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-782842: (1.036696884s)
--- PASS: TestCertOptions (57.14s)

TestCertExpiration (309.66s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-445055 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-445055 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m16.111538808s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-445055 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-445055 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (52.52408779s)
helpers_test.go:175: Cleaning up "cert-expiration-445055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-445055
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-445055: (1.027695049s)
--- PASS: TestCertExpiration (309.66s)

TestForceSystemdFlag (47.55s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-117977 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-117977 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (46.30536929s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-117977 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-117977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-117977
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-117977: (1.029788164s)
--- PASS: TestForceSystemdFlag (47.55s)

TestForceSystemdEnv (69.88s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-266925 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-266925 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m8.657783494s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-266925 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-266925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-266925
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-266925: (1.004136929s)
--- PASS: TestForceSystemdEnv (69.88s)

TestKVMDriverInstallOrUpdate (4.29s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.29s)

TestErrorSpam/setup (46.79s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-990231 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-990231 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-990231 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-990231 --driver=kvm2  --container-runtime=containerd: (46.786457577s)
--- PASS: TestErrorSpam/setup (46.79s)

TestErrorSpam/start (0.41s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

TestErrorSpam/status (0.79s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 status
--- PASS: TestErrorSpam/status (0.79s)

TestErrorSpam/pause (1.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 pause
--- PASS: TestErrorSpam/pause (1.69s)

TestErrorSpam/unpause (1.7s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

TestErrorSpam/stop (4.9s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 stop: (1.519600703s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 stop: (1.901754972s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-990231 --log_dir /tmp/nospam-990231 stop: (1.478541938s)
--- PASS: TestErrorSpam/stop (4.90s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18644-354432/.minikube/files/etc/test/nested/copy/361829/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (59.98s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-042762 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-042762 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (59.974910636s)
--- PASS: TestFunctional/serial/StartWithProxy (59.98s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (45.29s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-042762 --alsologtostderr -v=8
E0415 11:23:33.602674  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:23:33.608544  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:23:33.618837  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:23:33.639189  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:23:33.679575  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:23:33.759979  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:23:33.920463  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:23:34.241156  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:23:34.882165  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:23:36.162708  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:23:38.724301  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:23:43.845009  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:23:54.085372  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-042762 --alsologtostderr -v=8: (45.290759793s)
functional_test.go:659: soft start took 45.291660163s for "functional-042762" cluster.
--- PASS: TestFunctional/serial/SoftStart (45.29s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-042762 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-042762 cache add registry.k8s.io/pause:3.1: (1.353224515s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-042762 cache add registry.k8s.io/pause:3.3: (1.394109069s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-042762 cache add registry.k8s.io/pause:latest: (1.294338341s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)

TestFunctional/serial/CacheCmd/cache/add_local (2.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-042762 /tmp/TestFunctionalserialCacheCmdcacheadd_local2463640784/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 cache add minikube-local-cache-test:functional-042762
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-042762 cache add minikube-local-cache-test:functional-042762: (1.934579265s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 cache delete minikube-local-cache-test:functional-042762
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-042762
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.36s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042762 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (232.553848ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-042762 cache reload: (1.164847661s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 kubectl -- --context functional-042762 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-042762 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (38.51s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-042762 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0415 11:24:14.566542  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-042762 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.509477648s)
functional_test.go:757: restart took 38.509617241s for "functional-042762" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.51s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-042762 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.41s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-042762 logs: (1.41401546s)
--- PASS: TestFunctional/serial/LogsCmd (1.41s)

TestFunctional/serial/LogsFileCmd (1.37s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 logs --file /tmp/TestFunctionalserialLogsFileCmd1903445097/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-042762 logs --file /tmp/TestFunctionalserialLogsFileCmd1903445097/001/logs.txt: (1.365475363s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

TestFunctional/serial/InvalidService (3.64s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-042762 apply -f testdata/invalidsvc.yaml
E0415 11:24:55.526994  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-042762
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-042762: exit status 115 (293.943828ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.175:32230 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-042762 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.64s)

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042762 config get cpus: exit status 14 (74.053307ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042762 config get cpus: exit status 14 (62.406245ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (31.54s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-042762 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-042762 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 370439: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (31.54s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-042762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-042762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (154.73592ms)

-- stdout --
	* [functional-042762] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18644-354432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18644-354432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0415 11:25:11.978050  370261 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:25:11.978199  370261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:25:11.978212  370261 out.go:304] Setting ErrFile to fd 2...
	I0415 11:25:11.978218  370261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:25:11.978441  370261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
	I0415 11:25:11.978978  370261 out.go:298] Setting JSON to false
	I0415 11:25:11.980181  370261 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4055,"bootTime":1713176257,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 11:25:11.980249  370261 start.go:139] virtualization: kvm guest
	I0415 11:25:11.982385  370261 out.go:177] * [functional-042762] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 11:25:11.983844  370261 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 11:25:11.985133  370261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 11:25:11.983870  370261 notify.go:220] Checking for updates...
	I0415 11:25:11.986582  370261 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18644-354432/kubeconfig
	I0415 11:25:11.987827  370261 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18644-354432/.minikube
	I0415 11:25:11.988998  370261 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0415 11:25:11.990211  370261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 11:25:11.991977  370261 config.go:182] Loaded profile config "functional-042762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 11:25:11.992371  370261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:25:11.992438  370261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:25:12.009058  370261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0415 11:25:12.009526  370261 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:25:12.010272  370261 main.go:141] libmachine: Using API Version  1
	I0415 11:25:12.010305  370261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:25:12.010712  370261 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:25:12.010922  370261 main.go:141] libmachine: (functional-042762) Calling .DriverName
	I0415 11:25:12.011234  370261 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 11:25:12.011539  370261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:25:12.011596  370261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:25:12.027826  370261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I0415 11:25:12.028233  370261 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:25:12.028738  370261 main.go:141] libmachine: Using API Version  1
	I0415 11:25:12.028768  370261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:25:12.029139  370261 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:25:12.029379  370261 main.go:141] libmachine: (functional-042762) Calling .DriverName
	I0415 11:25:12.065219  370261 out.go:177] * Using the kvm2 driver based on existing profile
	I0415 11:25:12.066539  370261 start.go:297] selected driver: kvm2
	I0415 11:25:12.066555  370261 start.go:901] validating driver "kvm2" against &{Name:functional-042762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-042762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 11:25:12.066676  370261 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 11:25:12.068842  370261 out.go:177] 
	W0415 11:25:12.070206  370261 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0415 11:25:12.071547  370261 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-042762 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.31s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-042762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-042762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (159.131019ms)
-- stdout --
	* [functional-042762] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18644-354432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18644-354432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0415 11:25:11.822375  370216 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:25:11.822474  370216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:25:11.822482  370216 out.go:304] Setting ErrFile to fd 2...
	I0415 11:25:11.822487  370216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:25:11.822785  370216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
	I0415 11:25:11.823284  370216 out.go:298] Setting JSON to false
	I0415 11:25:11.824610  370216 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4055,"bootTime":1713176257,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 11:25:11.824696  370216 start.go:139] virtualization: kvm guest
	I0415 11:25:11.827601  370216 out.go:177] * [functional-042762] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0415 11:25:11.828997  370216 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 11:25:11.829009  370216 notify.go:220] Checking for updates...
	I0415 11:25:11.830437  370216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 11:25:11.832149  370216 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18644-354432/kubeconfig
	I0415 11:25:11.833652  370216 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18644-354432/.minikube
	I0415 11:25:11.835925  370216 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0415 11:25:11.837516  370216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 11:25:11.839568  370216 config.go:182] Loaded profile config "functional-042762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 11:25:11.840216  370216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:25:11.840281  370216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:25:11.856263  370216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33685
	I0415 11:25:11.856699  370216 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:25:11.857358  370216 main.go:141] libmachine: Using API Version  1
	I0415 11:25:11.857389  370216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:25:11.857825  370216 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:25:11.858063  370216 main.go:141] libmachine: (functional-042762) Calling .DriverName
	I0415 11:25:11.858383  370216 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 11:25:11.858738  370216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:25:11.858811  370216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:25:11.874159  370216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34961
	I0415 11:25:11.874583  370216 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:25:11.875182  370216 main.go:141] libmachine: Using API Version  1
	I0415 11:25:11.875224  370216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:25:11.875619  370216 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:25:11.875846  370216 main.go:141] libmachine: (functional-042762) Calling .DriverName
	I0415 11:25:11.908819  370216 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0415 11:25:11.910200  370216 start.go:297] selected driver: kvm2
	I0415 11:25:11.910216  370216 start.go:901] validating driver "kvm2" against &{Name:functional-042762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18621/minikube-v1.33.0-1712854267-18621-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-042762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 11:25:11.910364  370216 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 11:25:11.912781  370216 out.go:177] 
	W0415 11:25:11.914108  370216 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0415 11:25:11.915400  370216 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.83s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)

TestFunctional/parallel/ServiceCmdConnect (24.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-042762 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-042762 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-pwm4c" [ef771140-ed26-4647-a903-479e6b27c176] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-pwm4c" [ef771140-ed26-4647-a903-479e6b27c176] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.005506007s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.175:30434
functional_test.go:1657: error fetching http://192.168.39.175:30434: Get "http://192.168.39.175:30434": dial tcp 192.168.39.175:30434: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.175:30434: Get "http://192.168.39.175:30434": dial tcp 192.168.39.175:30434: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.175:30434: Get "http://192.168.39.175:30434": dial tcp 192.168.39.175:30434: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.175:30434: Get "http://192.168.39.175:30434": dial tcp 192.168.39.175:30434: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.175:30434: Get "http://192.168.39.175:30434": dial tcp 192.168.39.175:30434: connect: connection refused
functional_test.go:1671: http://192.168.39.175:30434: success! body:
Hostname: hello-node-connect-55497b8b78-pwm4c
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.175:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.39.175:30434
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (24.62s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (56.15s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [bd821827-9771-4b8e-9121-66d1ee52f6db] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004449893s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-042762 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-042762 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-042762 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-042762 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-042762 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-042762 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-042762 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-042762 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-042762 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [45e701e9-8422-439d-9213-c5525090514e] Pending
helpers_test.go:344: "sp-pod" [45e701e9-8422-439d-9213-c5525090514e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [45e701e9-8422-439d-9213-c5525090514e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004279404s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-042762 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-042762 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-042762 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [07164835-1530-4e5c-981b-8fb9bff22351] Pending
helpers_test.go:344: "sp-pod" [07164835-1530-4e5c-981b-8fb9bff22351] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [07164835-1530-4e5c-981b-8fb9bff22351] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005361134s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-042762 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (56.15s)

TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

TestFunctional/parallel/CpCmd (1.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh -n functional-042762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 cp functional-042762:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1104420292/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh -n functional-042762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh -n functional-042762 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.45s)

TestFunctional/parallel/MySQL (28.08s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-042762 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-hnldp" [ed198a41-6ce5-4baf-992a-546a18b48ef1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-hnldp" [ed198a41-6ce5-4baf-992a-546a18b48ef1] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.005757772s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-042762 exec mysql-859648c796-hnldp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-042762 exec mysql-859648c796-hnldp -- mysql -ppassword -e "show databases;": exit status 1 (397.216994ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-042762 exec mysql-859648c796-hnldp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-042762 exec mysql-859648c796-hnldp -- mysql -ppassword -e "show databases;": exit status 1 (340.807402ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-042762 exec mysql-859648c796-hnldp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-042762 exec mysql-859648c796-hnldp -- mysql -ppassword -e "show databases;": exit status 1 (189.838368ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-042762 exec mysql-859648c796-hnldp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-042762 exec mysql-859648c796-hnldp -- mysql -ppassword -e "show databases;": exit status 1 (201.42327ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-042762 exec mysql-859648c796-hnldp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.08s)

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/361829/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "sudo cat /etc/test/nested/copy/361829/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.45s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/361829.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "sudo cat /etc/ssl/certs/361829.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/361829.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "sudo cat /usr/share/ca-certificates/361829.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3618292.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "sudo cat /etc/ssl/certs/3618292.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3618292.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "sudo cat /usr/share/ca-certificates/3618292.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.45s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-042762 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042762 ssh "sudo systemctl is-active docker": exit status 1 (205.131117ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042762 ssh "sudo systemctl is-active crio": exit status 1 (209.504362ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
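A note on the exit codes above: `systemctl is-active` prints the unit state and exits 0 only when the unit is active; an inactive unit yields exit status 3, which `minikube ssh` surfaces as "Process exited with status 3" and a non-zero exit. The passing test relies on exactly that. The sketch below simulates the check locally (an assumption about systemd semantics; `check_runtime_disabled` and its simulated state argument are stand-ins, not the real `minikube ssh` path):

```shell
# Simulated `systemctl is-active <unit>`: prints the state, exits 0 only
# when active, 3 otherwise (systemd's documented "inactive" status code).
check_runtime_disabled() {
  unit="$1"
  state="$2"   # simulated state; the real test asks the guest over ssh
  echo "$state"
  [ "$state" = "active" ] && return 0 || return 3
}

# On a containerd cluster, docker should be inactive, i.e. non-zero exit:
if check_runtime_disabled docker inactive >/dev/null; then
  echo "docker is active (unexpected for the containerd job)"
else
  echo "docker is disabled, as NonActiveRuntimeDisabled expects"
fi
```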

TestFunctional/parallel/License (0.64s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-042762 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-042762 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-bfqrt" [21a55d07-bd8b-47ad-bd58-7ccbd0f28320] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-bfqrt" [21a55d07-bd8b-47ad-bd58-7ccbd0f28320] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.005006242s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "245.244526ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "61.290886ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "258.393697ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "78.837397ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

TestFunctional/parallel/MountCmd/any-port (7.34s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-042762 /tmp/TestFunctionalparallelMountCmdany-port2177397840/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713180300991743374" to /tmp/TestFunctionalparallelMountCmdany-port2177397840/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713180300991743374" to /tmp/TestFunctionalparallelMountCmdany-port2177397840/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713180300991743374" to /tmp/TestFunctionalparallelMountCmdany-port2177397840/001/test-1713180300991743374
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042762 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (243.614055ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 15 11:25 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 15 11:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 15 11:25 test-1713180300991743374
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh cat /mount-9p/test-1713180300991743374
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-042762 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [dd68290a-81ea-4820-9843-3705ba84d1a5] Pending
helpers_test.go:344: "busybox-mount" [dd68290a-81ea-4820-9843-3705ba84d1a5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [dd68290a-81ea-4820-9843-3705ba84d1a5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [dd68290a-81ea-4820-9843-3705ba84d1a5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004260635s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-042762 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-042762 /tmp/TestFunctionalparallelMountCmdany-port2177397840/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.34s)

TestFunctional/parallel/MountCmd/specific-port (1.87s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-042762 /tmp/TestFunctionalparallelMountCmdspecific-port2763783197/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042762 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (209.418554ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-042762 /tmp/TestFunctionalparallelMountCmdspecific-port2763783197/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042762 ssh "sudo umount -f /mount-9p": exit status 1 (203.65865ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-042762 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-042762 /tmp/TestFunctionalparallelMountCmdspecific-port2763783197/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)
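The tolerated failure above ("umount: /mount-9p: not mounted.", exit status 32) deserves a gloss: util-linux `umount` uses 32 for a failed unmount, including the not-mounted case, and the test accepts it because the mount daemon may already have torn down the 9p mount before cleanup runs. A minimal local simulation of that cleanup logic (the `force_umount` helper and its simulated state flag are stand-ins, not the real guest-side command):

```shell
# Simulated `umount -f <target>`: succeeds when mounted, otherwise mimics
# util-linux by printing the error and returning 32.
force_umount() {
  target="$1"
  mounted="$2"   # simulated state; the real test runs umount inside the VM
  if [ "$mounted" = "yes" ]; then
    echo "unmounted $target"
  else
    echo "umount: $target: not mounted." >&2
    return 32
  fi
}

# Cleanup tolerates "not mounted" instead of failing the test:
force_umount /mount-9p no || echo "already unmounted; continuing cleanup"
```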

TestFunctional/parallel/ServiceCmd/List (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-042762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3970456489/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-042762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3970456489/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-042762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3970456489/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042762 ssh "findmnt -T" /mount1: exit status 1 (227.047285ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-042762 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-042762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3970456489/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-042762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3970456489/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-042762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3970456489/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 service list -o json
functional_test.go:1490: Took "269.073712ms" to run "out/minikube-linux-amd64 -p functional-042762 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.175:32131
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.175:32131
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.79s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.79s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-042762 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-042762
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-042762
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-042762 image ls --format short --alsologtostderr:
I0415 11:25:46.762775  371229 out.go:291] Setting OutFile to fd 1 ...
I0415 11:25:46.762957  371229 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 11:25:46.762971  371229 out.go:304] Setting ErrFile to fd 2...
I0415 11:25:46.762977  371229 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 11:25:46.763257  371229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
I0415 11:25:46.763938  371229 config.go:182] Loaded profile config "functional-042762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 11:25:46.764038  371229 config.go:182] Loaded profile config "functional-042762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 11:25:46.764431  371229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0415 11:25:46.764489  371229 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 11:25:46.780405  371229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43661
I0415 11:25:46.782900  371229 main.go:141] libmachine: () Calling .GetVersion
I0415 11:25:46.783595  371229 main.go:141] libmachine: Using API Version  1
I0415 11:25:46.783619  371229 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 11:25:46.784022  371229 main.go:141] libmachine: () Calling .GetMachineName
I0415 11:25:46.784255  371229 main.go:141] libmachine: (functional-042762) Calling .GetState
I0415 11:25:46.786516  371229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0415 11:25:46.786564  371229 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 11:25:46.803396  371229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45855
I0415 11:25:46.803970  371229 main.go:141] libmachine: () Calling .GetVersion
I0415 11:25:46.804571  371229 main.go:141] libmachine: Using API Version  1
I0415 11:25:46.804595  371229 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 11:25:46.804968  371229 main.go:141] libmachine: () Calling .GetMachineName
I0415 11:25:46.805161  371229 main.go:141] libmachine: (functional-042762) Calling .DriverName
I0415 11:25:46.805375  371229 ssh_runner.go:195] Run: systemctl --version
I0415 11:25:46.805408  371229 main.go:141] libmachine: (functional-042762) Calling .GetSSHHostname
I0415 11:25:46.809223  371229 main.go:141] libmachine: (functional-042762) DBG | domain functional-042762 has defined MAC address 52:54:00:ea:e9:9a in network mk-functional-042762
I0415 11:25:46.809598  371229 main.go:141] libmachine: (functional-042762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:9a", ip: ""} in network mk-functional-042762: {Iface:virbr1 ExpiryTime:2024-04-15 12:22:34 +0000 UTC Type:0 Mac:52:54:00:ea:e9:9a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-042762 Clientid:01:52:54:00:ea:e9:9a}
I0415 11:25:46.809622  371229 main.go:141] libmachine: (functional-042762) DBG | domain functional-042762 has defined IP address 192.168.39.175 and MAC address 52:54:00:ea:e9:9a in network mk-functional-042762
I0415 11:25:46.809803  371229 main.go:141] libmachine: (functional-042762) Calling .GetSSHPort
I0415 11:25:46.809992  371229 main.go:141] libmachine: (functional-042762) Calling .GetSSHKeyPath
I0415 11:25:46.810138  371229 main.go:141] libmachine: (functional-042762) Calling .GetSSHUsername
I0415 11:25:46.810285  371229 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/functional-042762/id_rsa Username:docker}
I0415 11:25:46.891620  371229 ssh_runner.go:195] Run: sudo crictl images --output json
I0415 11:25:46.953960  371229 main.go:141] libmachine: Making call to close driver server
I0415 11:25:46.953979  371229 main.go:141] libmachine: (functional-042762) Calling .Close
I0415 11:25:46.954290  371229 main.go:141] libmachine: Successfully made call to close driver server
I0415 11:25:46.954310  371229 main.go:141] libmachine: Making call to close connection to plugin binary
I0415 11:25:46.954320  371229 main.go:141] libmachine: Making call to close driver server
I0415 11:25:46.954333  371229 main.go:141] libmachine: (functional-042762) Calling .Close
I0415 11:25:46.954606  371229 main.go:141] libmachine: Successfully made call to close driver server
I0415 11:25:46.954621  371229 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-042762 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-controller-manager     | v1.29.3            | sha256:6052a2 | 33.5MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| gcr.io/google-containers/addon-resizer      | functional-042762  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/kube-scheduler              | v1.29.3            | sha256:8c390d | 18.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/library/minikube-local-cache-test | functional-042762  | sha256:8c2b8e | 993B   |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| registry.k8s.io/kube-apiserver              | v1.29.3            | sha256:39f995 | 35.1MB |
| registry.k8s.io/kube-proxy                  | v1.29.3            | sha256:a1d263 | 28.4MB |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4950bb | 27.8MB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| docker.io/library/nginx                     | latest             | sha256:c613f1 | 70.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-042762 image ls --format table --alsologtostderr:
I0415 11:25:47.033846  371317 out.go:291] Setting OutFile to fd 1 ...
I0415 11:25:47.033969  371317 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 11:25:47.033980  371317 out.go:304] Setting ErrFile to fd 2...
I0415 11:25:47.033985  371317 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 11:25:47.034175  371317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
I0415 11:25:47.034752  371317 config.go:182] Loaded profile config "functional-042762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 11:25:47.034841  371317 config.go:182] Loaded profile config "functional-042762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 11:25:47.035229  371317 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0415 11:25:47.035277  371317 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 11:25:47.051928  371317 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
I0415 11:25:47.052737  371317 main.go:141] libmachine: () Calling .GetVersion
I0415 11:25:47.053494  371317 main.go:141] libmachine: Using API Version  1
I0415 11:25:47.053514  371317 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 11:25:47.053956  371317 main.go:141] libmachine: () Calling .GetMachineName
I0415 11:25:47.054143  371317 main.go:141] libmachine: (functional-042762) Calling .GetState
I0415 11:25:47.055676  371317 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0415 11:25:47.055720  371317 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 11:25:47.071566  371317 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
I0415 11:25:47.072030  371317 main.go:141] libmachine: () Calling .GetVersion
I0415 11:25:47.072568  371317 main.go:141] libmachine: Using API Version  1
I0415 11:25:47.072614  371317 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 11:25:47.072962  371317 main.go:141] libmachine: () Calling .GetMachineName
I0415 11:25:47.073170  371317 main.go:141] libmachine: (functional-042762) Calling .DriverName
I0415 11:25:47.073461  371317 ssh_runner.go:195] Run: systemctl --version
I0415 11:25:47.073485  371317 main.go:141] libmachine: (functional-042762) Calling .GetSSHHostname
I0415 11:25:47.077175  371317 main.go:141] libmachine: (functional-042762) DBG | domain functional-042762 has defined MAC address 52:54:00:ea:e9:9a in network mk-functional-042762
I0415 11:25:47.077588  371317 main.go:141] libmachine: (functional-042762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:9a", ip: ""} in network mk-functional-042762: {Iface:virbr1 ExpiryTime:2024-04-15 12:22:34 +0000 UTC Type:0 Mac:52:54:00:ea:e9:9a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-042762 Clientid:01:52:54:00:ea:e9:9a}
I0415 11:25:47.077624  371317 main.go:141] libmachine: (functional-042762) DBG | domain functional-042762 has defined IP address 192.168.39.175 and MAC address 52:54:00:ea:e9:9a in network mk-functional-042762
I0415 11:25:47.077770  371317 main.go:141] libmachine: (functional-042762) Calling .GetSSHPort
I0415 11:25:47.078005  371317 main.go:141] libmachine: (functional-042762) Calling .GetSSHKeyPath
I0415 11:25:47.078238  371317 main.go:141] libmachine: (functional-042762) Calling .GetSSHUsername
I0415 11:25:47.078414  371317 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/functional-042762/id_rsa Username:docker}
I0415 11:25:47.165501  371317 ssh_runner.go:195] Run: sudo crictl images --output json
I0415 11:25:47.242324  371317 main.go:141] libmachine: Making call to close driver server
I0415 11:25:47.242342  371317 main.go:141] libmachine: (functional-042762) Calling .Close
I0415 11:25:47.242617  371317 main.go:141] libmachine: Successfully made call to close driver server
I0415 11:25:47.242636  371317 main.go:141] libmachine: Making call to close connection to plugin binary
I0415 11:25:47.242644  371317 main.go:141] libmachine: Making call to close driver server
I0415 11:25:47.242642  371317 main.go:141] libmachine: (functional-042762) DBG | Closing plugin on server side
I0415 11:25:47.242652  371317 main.go:141] libmachine: (functional-042762) Calling .Close
I0415 11:25:47.242882  371317 main.go:141] libmachine: (functional-042762) DBG | Closing plugin on server side
I0415 11:25:47.242943  371317 main.go:141] libmachine: Successfully made call to close driver server
I0415 11:25:47.242979  371317 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-042762 image ls --format json --alsologtostderr:
[{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b","repoDigests":["docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1"],"repoTags":["docker.io/library/nginx:latest"],"size":"70542235"},{"id":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"33466661"},{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"28398741"},{"id":"sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"27755257"},{"id":"sha256:8c2b8e69efd1acd2d4867b22519951f9dfb4e51835deb91cccd7fa1693d7a4cb","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-042762"],"size":"993"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"35100536"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"18553260"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-042762"],"size":"10823156"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-042762 image ls --format json --alsologtostderr:
I0415 11:25:47.032010  371311 out.go:291] Setting OutFile to fd 1 ...
I0415 11:25:47.032214  371311 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 11:25:47.032228  371311 out.go:304] Setting ErrFile to fd 2...
I0415 11:25:47.032236  371311 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 11:25:47.032703  371311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
I0415 11:25:47.033313  371311 config.go:182] Loaded profile config "functional-042762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 11:25:47.033429  371311 config.go:182] Loaded profile config "functional-042762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 11:25:47.033811  371311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0415 11:25:47.033874  371311 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 11:25:47.050408  371311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
I0415 11:25:47.050844  371311 main.go:141] libmachine: () Calling .GetVersion
I0415 11:25:47.051486  371311 main.go:141] libmachine: Using API Version  1
I0415 11:25:47.051516  371311 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 11:25:47.051862  371311 main.go:141] libmachine: () Calling .GetMachineName
I0415 11:25:47.052073  371311 main.go:141] libmachine: (functional-042762) Calling .GetState
I0415 11:25:47.054133  371311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0415 11:25:47.054179  371311 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 11:25:47.071602  371311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44197
I0415 11:25:47.072056  371311 main.go:141] libmachine: () Calling .GetVersion
I0415 11:25:47.072550  371311 main.go:141] libmachine: Using API Version  1
I0415 11:25:47.072570  371311 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 11:25:47.072971  371311 main.go:141] libmachine: () Calling .GetMachineName
I0415 11:25:47.073293  371311 main.go:141] libmachine: (functional-042762) Calling .DriverName
I0415 11:25:47.073735  371311 ssh_runner.go:195] Run: systemctl --version
I0415 11:25:47.073781  371311 main.go:141] libmachine: (functional-042762) Calling .GetSSHHostname
I0415 11:25:47.077916  371311 main.go:141] libmachine: (functional-042762) DBG | domain functional-042762 has defined MAC address 52:54:00:ea:e9:9a in network mk-functional-042762
I0415 11:25:47.078806  371311 main.go:141] libmachine: (functional-042762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:9a", ip: ""} in network mk-functional-042762: {Iface:virbr1 ExpiryTime:2024-04-15 12:22:34 +0000 UTC Type:0 Mac:52:54:00:ea:e9:9a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-042762 Clientid:01:52:54:00:ea:e9:9a}
I0415 11:25:47.078856  371311 main.go:141] libmachine: (functional-042762) DBG | domain functional-042762 has defined IP address 192.168.39.175 and MAC address 52:54:00:ea:e9:9a in network mk-functional-042762
I0415 11:25:47.078966  371311 main.go:141] libmachine: (functional-042762) Calling .GetSSHPort
I0415 11:25:47.079128  371311 main.go:141] libmachine: (functional-042762) Calling .GetSSHKeyPath
I0415 11:25:47.079343  371311 main.go:141] libmachine: (functional-042762) Calling .GetSSHUsername
I0415 11:25:47.079609  371311 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/functional-042762/id_rsa Username:docker}
I0415 11:25:47.159070  371311 ssh_runner.go:195] Run: sudo crictl images --output json
I0415 11:25:47.241773  371311 main.go:141] libmachine: Making call to close driver server
I0415 11:25:47.241792  371311 main.go:141] libmachine: (functional-042762) Calling .Close
I0415 11:25:47.242111  371311 main.go:141] libmachine: (functional-042762) DBG | Closing plugin on server side
I0415 11:25:47.242119  371311 main.go:141] libmachine: Successfully made call to close driver server
I0415 11:25:47.242140  371311 main.go:141] libmachine: Making call to close connection to plugin binary
I0415 11:25:47.242150  371311 main.go:141] libmachine: Making call to close driver server
I0415 11:25:47.242157  371311 main.go:141] libmachine: (functional-042762) Calling .Close
I0415 11:25:47.245157  371311 main.go:141] libmachine: (functional-042762) DBG | Closing plugin on server side
I0415 11:25:47.245161  371311 main.go:141] libmachine: Successfully made call to close driver server
I0415 11:25:47.245179  371311 main.go:141] libmachine: Making call to close connection to plugin binary
W0415 11:25:47.247448  371311 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 6c0d3ef3-62d4-461d-a845-587a1b9bd23d
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-042762 image ls --format yaml --alsologtostderr:
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:8c2b8e69efd1acd2d4867b22519951f9dfb4e51835deb91cccd7fa1693d7a4cb
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-042762
size: "993"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "33466661"
- id: sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "28398741"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "27755257"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b
repoDigests:
- docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1
repoTags:
- docker.io/library/nginx:latest
size: "70542235"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-042762
size: "10823156"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "35100536"
- id: sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "18553260"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-042762 image ls --format yaml --alsologtostderr:
I0415 11:25:46.773643  371230 out.go:291] Setting OutFile to fd 1 ...
I0415 11:25:46.773780  371230 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 11:25:46.773787  371230 out.go:304] Setting ErrFile to fd 2...
I0415 11:25:46.773791  371230 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 11:25:46.773988  371230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
I0415 11:25:46.774591  371230 config.go:182] Loaded profile config "functional-042762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 11:25:46.774704  371230 config.go:182] Loaded profile config "functional-042762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 11:25:46.775066  371230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0415 11:25:46.775117  371230 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 11:25:46.790463  371230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38857
I0415 11:25:46.790891  371230 main.go:141] libmachine: () Calling .GetVersion
I0415 11:25:46.791677  371230 main.go:141] libmachine: Using API Version  1
I0415 11:25:46.791705  371230 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 11:25:46.792107  371230 main.go:141] libmachine: () Calling .GetMachineName
I0415 11:25:46.792361  371230 main.go:141] libmachine: (functional-042762) Calling .GetState
I0415 11:25:46.794030  371230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0415 11:25:46.794067  371230 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 11:25:46.810444  371230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43051
I0415 11:25:46.810814  371230 main.go:141] libmachine: () Calling .GetVersion
I0415 11:25:46.811299  371230 main.go:141] libmachine: Using API Version  1
I0415 11:25:46.811331  371230 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 11:25:46.811802  371230 main.go:141] libmachine: () Calling .GetMachineName
I0415 11:25:46.812134  371230 main.go:141] libmachine: (functional-042762) Calling .DriverName
I0415 11:25:46.812375  371230 ssh_runner.go:195] Run: systemctl --version
I0415 11:25:46.812417  371230 main.go:141] libmachine: (functional-042762) Calling .GetSSHHostname
I0415 11:25:46.815104  371230 main.go:141] libmachine: (functional-042762) DBG | domain functional-042762 has defined MAC address 52:54:00:ea:e9:9a in network mk-functional-042762
I0415 11:25:46.815740  371230 main.go:141] libmachine: (functional-042762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:9a", ip: ""} in network mk-functional-042762: {Iface:virbr1 ExpiryTime:2024-04-15 12:22:34 +0000 UTC Type:0 Mac:52:54:00:ea:e9:9a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-042762 Clientid:01:52:54:00:ea:e9:9a}
I0415 11:25:46.815783  371230 main.go:141] libmachine: (functional-042762) DBG | domain functional-042762 has defined IP address 192.168.39.175 and MAC address 52:54:00:ea:e9:9a in network mk-functional-042762
I0415 11:25:46.815843  371230 main.go:141] libmachine: (functional-042762) Calling .GetSSHPort
I0415 11:25:46.816053  371230 main.go:141] libmachine: (functional-042762) Calling .GetSSHKeyPath
I0415 11:25:46.816251  371230 main.go:141] libmachine: (functional-042762) Calling .GetSSHUsername
I0415 11:25:46.816408  371230 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/functional-042762/id_rsa Username:docker}
I0415 11:25:46.895103  371230 ssh_runner.go:195] Run: sudo crictl images --output json
I0415 11:25:46.961290  371230 main.go:141] libmachine: Making call to close driver server
I0415 11:25:46.961309  371230 main.go:141] libmachine: (functional-042762) Calling .Close
I0415 11:25:46.961596  371230 main.go:141] libmachine: Successfully made call to close driver server
I0415 11:25:46.961625  371230 main.go:141] libmachine: Making call to close connection to plugin binary
I0415 11:25:46.961631  371230 main.go:141] libmachine: (functional-042762) DBG | Closing plugin on server side
I0415 11:25:46.961643  371230 main.go:141] libmachine: Making call to close driver server
I0415 11:25:46.961661  371230 main.go:141] libmachine: (functional-042762) Calling .Close
I0415 11:25:46.961931  371230 main.go:141] libmachine: Successfully made call to close driver server
I0415 11:25:46.961945  371230 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-042762 ssh pgrep buildkitd: exit status 1 (251.965879ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image build -t localhost/my-image:functional-042762 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-042762 image build -t localhost/my-image:functional-042762 testdata/build --alsologtostderr: (3.656200184s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-042762 image build -t localhost/my-image:functional-042762 testdata/build --alsologtostderr:
I0415 11:25:47.017210  371304 out.go:291] Setting OutFile to fd 1 ...
I0415 11:25:47.017463  371304 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 11:25:47.017586  371304 out.go:304] Setting ErrFile to fd 2...
I0415 11:25:47.017604  371304 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 11:25:47.017930  371304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
I0415 11:25:47.018820  371304 config.go:182] Loaded profile config "functional-042762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 11:25:47.019467  371304 config.go:182] Loaded profile config "functional-042762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0415 11:25:47.019872  371304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0415 11:25:47.019947  371304 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 11:25:47.037355  371304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40917
I0415 11:25:47.037864  371304 main.go:141] libmachine: () Calling .GetVersion
I0415 11:25:47.038425  371304 main.go:141] libmachine: Using API Version  1
I0415 11:25:47.038444  371304 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 11:25:47.038807  371304 main.go:141] libmachine: () Calling .GetMachineName
I0415 11:25:47.039057  371304 main.go:141] libmachine: (functional-042762) Calling .GetState
I0415 11:25:47.041265  371304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0415 11:25:47.041316  371304 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 11:25:47.056025  371304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
I0415 11:25:47.056425  371304 main.go:141] libmachine: () Calling .GetVersion
I0415 11:25:47.056976  371304 main.go:141] libmachine: Using API Version  1
I0415 11:25:47.056995  371304 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 11:25:47.057320  371304 main.go:141] libmachine: () Calling .GetMachineName
I0415 11:25:47.057528  371304 main.go:141] libmachine: (functional-042762) Calling .DriverName
I0415 11:25:47.057723  371304 ssh_runner.go:195] Run: systemctl --version
I0415 11:25:47.057748  371304 main.go:141] libmachine: (functional-042762) Calling .GetSSHHostname
I0415 11:25:47.060434  371304 main.go:141] libmachine: (functional-042762) DBG | domain functional-042762 has defined MAC address 52:54:00:ea:e9:9a in network mk-functional-042762
I0415 11:25:47.060848  371304 main.go:141] libmachine: (functional-042762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e9:9a", ip: ""} in network mk-functional-042762: {Iface:virbr1 ExpiryTime:2024-04-15 12:22:34 +0000 UTC Type:0 Mac:52:54:00:ea:e9:9a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-042762 Clientid:01:52:54:00:ea:e9:9a}
I0415 11:25:47.060882  371304 main.go:141] libmachine: (functional-042762) DBG | domain functional-042762 has defined IP address 192.168.39.175 and MAC address 52:54:00:ea:e9:9a in network mk-functional-042762
I0415 11:25:47.060999  371304 main.go:141] libmachine: (functional-042762) Calling .GetSSHPort
I0415 11:25:47.061173  371304 main.go:141] libmachine: (functional-042762) Calling .GetSSHKeyPath
I0415 11:25:47.061352  371304 main.go:141] libmachine: (functional-042762) Calling .GetSSHUsername
I0415 11:25:47.061718  371304 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/functional-042762/id_rsa Username:docker}
I0415 11:25:47.147026  371304 build_images.go:161] Building image from path: /tmp/build.2559585971.tar
I0415 11:25:47.147111  371304 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0415 11:25:47.161895  371304 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2559585971.tar
I0415 11:25:47.170128  371304 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2559585971.tar: stat -c "%s %y" /var/lib/minikube/build/build.2559585971.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2559585971.tar': No such file or directory
I0415 11:25:47.170196  371304 ssh_runner.go:362] scp /tmp/build.2559585971.tar --> /var/lib/minikube/build/build.2559585971.tar (3072 bytes)
I0415 11:25:47.245090  371304 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2559585971
I0415 11:25:47.260272  371304 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2559585971 -xf /var/lib/minikube/build/build.2559585971.tar
I0415 11:25:47.271734  371304 containerd.go:394] Building image: /var/lib/minikube/build/build.2559585971
I0415 11:25:47.271863  371304 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2559585971 --local dockerfile=/var/lib/minikube/build/build.2559585971 --output type=image,name=localhost/my-image:functional-042762
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B done
#1 DONE 0.2s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.7s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:0bf312439c662daee19cc9ba59d65f573abac6bfe61c8b7a842d9fffca4511ac
#8 exporting manifest sha256:0bf312439c662daee19cc9ba59d65f573abac6bfe61c8b7a842d9fffca4511ac 0.0s done
#8 exporting config sha256:4abf6087d17cae1313ed020045bbf9e13d74ba1c51ef52d9d0ebc8b1c14cc5f1 0.0s done
#8 naming to localhost/my-image:functional-042762 done
#8 DONE 0.2s
I0415 11:25:50.564657  371304 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2559585971 --local dockerfile=/var/lib/minikube/build/build.2559585971 --output type=image,name=localhost/my-image:functional-042762: (3.292748577s)
I0415 11:25:50.564752  371304 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2559585971
I0415 11:25:50.586318  371304 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2559585971.tar
I0415 11:25:50.598293  371304 build_images.go:217] Built localhost/my-image:functional-042762 from /tmp/build.2559585971.tar
I0415 11:25:50.598339  371304 build_images.go:133] succeeded building to: functional-042762
I0415 11:25:50.598346  371304 build_images.go:134] failed building to: 
I0415 11:25:50.598377  371304 main.go:141] libmachine: Making call to close driver server
I0415 11:25:50.598393  371304 main.go:141] libmachine: (functional-042762) Calling .Close
I0415 11:25:50.598765  371304 main.go:141] libmachine: Successfully made call to close driver server
I0415 11:25:50.598775  371304 main.go:141] libmachine: (functional-042762) DBG | Closing plugin on server side
I0415 11:25:50.598794  371304 main.go:141] libmachine: Making call to close connection to plugin binary
I0415 11:25:50.598807  371304 main.go:141] libmachine: Making call to close driver server
I0415 11:25:50.598815  371304 main.go:141] libmachine: (functional-042762) Calling .Close
I0415 11:25:50.599075  371304 main.go:141] libmachine: Successfully made call to close driver server
I0415 11:25:50.599093  371304 main.go:141] libmachine: Making call to close connection to plugin binary
I0415 11:25:50.599117  371304 main.go:141] libmachine: (functional-042762) DBG | Closing plugin on server side
W0415 11:25:50.600442  371304 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 096c507f-2c3f-4df9-bee8-871ae19c9383
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)

TestFunctional/parallel/ImageCommands/Setup (2.06s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.038416497s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-042762
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.06s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image load --daemon gcr.io/google-containers/addon-resizer:functional-042762 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-042762 image load --daemon gcr.io/google-containers/addon-resizer:functional-042762 --alsologtostderr: (5.15609612s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.41s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image load --daemon gcr.io/google-containers/addon-resizer:functional-042762 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-042762 image load --daemon gcr.io/google-containers/addon-resizer:functional-042762 --alsologtostderr: (3.04968236s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.88120048s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-042762
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image load --daemon gcr.io/google-containers/addon-resizer:functional-042762 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-042762 image load --daemon gcr.io/google-containers/addon-resizer:functional-042762 --alsologtostderr: (4.74981478s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.88s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image save gcr.io/google-containers/addon-resizer:functional-042762 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
2024/04/15 11:25:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-042762 image save gcr.io/google-containers/addon-resizer:functional-042762 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.063661505s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.06s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image rm gcr.io/google-containers/addon-resizer:functional-042762 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-042762 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.313303221s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.53s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-042762
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-042762 image save --daemon gcr.io/google-containers/addon-resizer:functional-042762 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-042762 image save --daemon gcr.io/google-containers/addon-resizer:functional-042762 --alsologtostderr: (1.056661537s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-042762
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.09s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-042762
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-042762
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-042762
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (213.69s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-676550 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0415 11:26:17.448182  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:28:33.602566  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:29:01.288560  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-676550 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m32.96541204s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (213.69s)

TestMultiControlPlane/serial/DeployApp (6.81s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-676550 -- rollout status deployment/busybox: (4.391186045s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- exec busybox-7fdf7869d9-8ktjn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- exec busybox-7fdf7869d9-fdl7n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- exec busybox-7fdf7869d9-rwg7x -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- exec busybox-7fdf7869d9-8ktjn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- exec busybox-7fdf7869d9-fdl7n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- exec busybox-7fdf7869d9-rwg7x -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- exec busybox-7fdf7869d9-8ktjn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- exec busybox-7fdf7869d9-fdl7n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- exec busybox-7fdf7869d9-rwg7x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.81s)

TestMultiControlPlane/serial/PingHostFromPods (1.39s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- exec busybox-7fdf7869d9-8ktjn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- exec busybox-7fdf7869d9-8ktjn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- exec busybox-7fdf7869d9-fdl7n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- exec busybox-7fdf7869d9-fdl7n -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- exec busybox-7fdf7869d9-rwg7x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-676550 -- exec busybox-7fdf7869d9-rwg7x -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.39s)

TestMultiControlPlane/serial/AddWorkerNode (46.38s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-676550 -v=7 --alsologtostderr
E0415 11:29:59.235175  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 11:29:59.240517  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 11:29:59.250842  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 11:29:59.271249  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 11:29:59.311617  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 11:29:59.392005  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 11:29:59.552473  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 11:29:59.872863  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 11:30:00.513843  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 11:30:01.794274  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 11:30:04.355183  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 11:30:09.476248  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 11:30:19.716745  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-676550 -v=7 --alsologtostderr: (45.504336436s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.38s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-676550 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

TestMultiControlPlane/serial/CopyFile (14.11s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp testdata/cp-test.txt ha-676550:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1026847236/001/cp-test_ha-676550.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550:/home/docker/cp-test.txt ha-676550-m02:/home/docker/cp-test_ha-676550_ha-676550-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m02 "sudo cat /home/docker/cp-test_ha-676550_ha-676550-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550:/home/docker/cp-test.txt ha-676550-m03:/home/docker/cp-test_ha-676550_ha-676550-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m03 "sudo cat /home/docker/cp-test_ha-676550_ha-676550-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550:/home/docker/cp-test.txt ha-676550-m04:/home/docker/cp-test_ha-676550_ha-676550-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m04 "sudo cat /home/docker/cp-test_ha-676550_ha-676550-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp testdata/cp-test.txt ha-676550-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1026847236/001/cp-test_ha-676550-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550-m02:/home/docker/cp-test.txt ha-676550:/home/docker/cp-test_ha-676550-m02_ha-676550.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550 "sudo cat /home/docker/cp-test_ha-676550-m02_ha-676550.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550-m02:/home/docker/cp-test.txt ha-676550-m03:/home/docker/cp-test_ha-676550-m02_ha-676550-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m03 "sudo cat /home/docker/cp-test_ha-676550-m02_ha-676550-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550-m02:/home/docker/cp-test.txt ha-676550-m04:/home/docker/cp-test_ha-676550-m02_ha-676550-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m04 "sudo cat /home/docker/cp-test_ha-676550-m02_ha-676550-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp testdata/cp-test.txt ha-676550-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1026847236/001/cp-test_ha-676550-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550-m03:/home/docker/cp-test.txt ha-676550:/home/docker/cp-test_ha-676550-m03_ha-676550.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550 "sudo cat /home/docker/cp-test_ha-676550-m03_ha-676550.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550-m03:/home/docker/cp-test.txt ha-676550-m02:/home/docker/cp-test_ha-676550-m03_ha-676550-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m02 "sudo cat /home/docker/cp-test_ha-676550-m03_ha-676550-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550-m03:/home/docker/cp-test.txt ha-676550-m04:/home/docker/cp-test_ha-676550-m03_ha-676550-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m04 "sudo cat /home/docker/cp-test_ha-676550-m03_ha-676550-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp testdata/cp-test.txt ha-676550-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1026847236/001/cp-test_ha-676550-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550-m04:/home/docker/cp-test.txt ha-676550:/home/docker/cp-test_ha-676550-m04_ha-676550.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550 "sudo cat /home/docker/cp-test_ha-676550-m04_ha-676550.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550-m04:/home/docker/cp-test.txt ha-676550-m02:/home/docker/cp-test_ha-676550-m04_ha-676550-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m02 "sudo cat /home/docker/cp-test_ha-676550-m04_ha-676550-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 cp ha-676550-m04:/home/docker/cp-test.txt ha-676550-m03:/home/docker/cp-test_ha-676550-m04_ha-676550-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 ssh -n ha-676550-m03 "sudo cat /home/docker/cp-test_ha-676550-m04_ha-676550-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.11s)

TestMultiControlPlane/serial/StopSecondaryNode (92.5s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 node stop m02 -v=7 --alsologtostderr
E0415 11:30:40.197450  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 11:31:21.158709  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-676550 node stop m02 -v=7 --alsologtostderr: (1m31.787325074s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-676550 status -v=7 --alsologtostderr: exit status 7 (709.026844ms)

-- stdout --
	ha-676550
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-676550-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-676550-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-676550-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0415 11:32:11.617007  375955 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:32:11.617153  375955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:32:11.617165  375955 out.go:304] Setting ErrFile to fd 2...
	I0415 11:32:11.617172  375955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:32:11.617384  375955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
	I0415 11:32:11.617587  375955 out.go:298] Setting JSON to false
	I0415 11:32:11.617625  375955 mustload.go:65] Loading cluster: ha-676550
	I0415 11:32:11.617728  375955 notify.go:220] Checking for updates...
	I0415 11:32:11.618821  375955 config.go:182] Loaded profile config "ha-676550": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 11:32:11.618899  375955 status.go:255] checking status of ha-676550 ...
	I0415 11:32:11.620180  375955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:32:11.620270  375955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:32:11.636431  375955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0415 11:32:11.636966  375955 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:32:11.637657  375955 main.go:141] libmachine: Using API Version  1
	I0415 11:32:11.637675  375955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:32:11.638054  375955 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:32:11.638269  375955 main.go:141] libmachine: (ha-676550) Calling .GetState
	I0415 11:32:11.640523  375955 status.go:330] ha-676550 host status = "Running" (err=<nil>)
	I0415 11:32:11.640549  375955 host.go:66] Checking if "ha-676550" exists ...
	I0415 11:32:11.640880  375955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:32:11.640923  375955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:32:11.657592  375955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45195
	I0415 11:32:11.658094  375955 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:32:11.658688  375955 main.go:141] libmachine: Using API Version  1
	I0415 11:32:11.658741  375955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:32:11.659171  375955 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:32:11.659413  375955 main.go:141] libmachine: (ha-676550) Calling .GetIP
	I0415 11:32:11.662368  375955 main.go:141] libmachine: (ha-676550) DBG | domain ha-676550 has defined MAC address 52:54:00:b4:8e:0c in network mk-ha-676550
	I0415 11:32:11.662819  375955 main.go:141] libmachine: (ha-676550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8e:0c", ip: ""} in network mk-ha-676550: {Iface:virbr1 ExpiryTime:2024-04-15 12:26:11 +0000 UTC Type:0 Mac:52:54:00:b4:8e:0c Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-676550 Clientid:01:52:54:00:b4:8e:0c}
	I0415 11:32:11.662853  375955 main.go:141] libmachine: (ha-676550) DBG | domain ha-676550 has defined IP address 192.168.39.26 and MAC address 52:54:00:b4:8e:0c in network mk-ha-676550
	I0415 11:32:11.663008  375955 host.go:66] Checking if "ha-676550" exists ...
	I0415 11:32:11.663356  375955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:32:11.663399  375955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:32:11.679842  375955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34225
	I0415 11:32:11.680407  375955 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:32:11.680980  375955 main.go:141] libmachine: Using API Version  1
	I0415 11:32:11.681008  375955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:32:11.681442  375955 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:32:11.681686  375955 main.go:141] libmachine: (ha-676550) Calling .DriverName
	I0415 11:32:11.681903  375955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 11:32:11.681940  375955 main.go:141] libmachine: (ha-676550) Calling .GetSSHHostname
	I0415 11:32:11.685054  375955 main.go:141] libmachine: (ha-676550) DBG | domain ha-676550 has defined MAC address 52:54:00:b4:8e:0c in network mk-ha-676550
	I0415 11:32:11.685714  375955 main.go:141] libmachine: (ha-676550) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:8e:0c", ip: ""} in network mk-ha-676550: {Iface:virbr1 ExpiryTime:2024-04-15 12:26:11 +0000 UTC Type:0 Mac:52:54:00:b4:8e:0c Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-676550 Clientid:01:52:54:00:b4:8e:0c}
	I0415 11:32:11.685757  375955 main.go:141] libmachine: (ha-676550) DBG | domain ha-676550 has defined IP address 192.168.39.26 and MAC address 52:54:00:b4:8e:0c in network mk-ha-676550
	I0415 11:32:11.686077  375955 main.go:141] libmachine: (ha-676550) Calling .GetSSHPort
	I0415 11:32:11.686277  375955 main.go:141] libmachine: (ha-676550) Calling .GetSSHKeyPath
	I0415 11:32:11.686515  375955 main.go:141] libmachine: (ha-676550) Calling .GetSSHUsername
	I0415 11:32:11.686699  375955 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/ha-676550/id_rsa Username:docker}
	I0415 11:32:11.778166  375955 ssh_runner.go:195] Run: systemctl --version
	I0415 11:32:11.786160  375955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 11:32:11.805893  375955 kubeconfig.go:125] found "ha-676550" server: "https://192.168.39.254:8443"
	I0415 11:32:11.805935  375955 api_server.go:166] Checking apiserver status ...
	I0415 11:32:11.805992  375955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 11:32:11.826479  375955 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup
	W0415 11:32:11.838002  375955 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0415 11:32:11.838079  375955 ssh_runner.go:195] Run: ls
	I0415 11:32:11.850171  375955 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0415 11:32:11.855156  375955 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0415 11:32:11.855184  375955 status.go:422] ha-676550 apiserver status = Running (err=<nil>)
	I0415 11:32:11.855195  375955 status.go:257] ha-676550 status: &{Name:ha-676550 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 11:32:11.855212  375955 status.go:255] checking status of ha-676550-m02 ...
	I0415 11:32:11.855623  375955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:32:11.855688  375955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:32:11.872379  375955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0415 11:32:11.873097  375955 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:32:11.873613  375955 main.go:141] libmachine: Using API Version  1
	I0415 11:32:11.873640  375955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:32:11.873978  375955 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:32:11.874207  375955 main.go:141] libmachine: (ha-676550-m02) Calling .GetState
	I0415 11:32:11.876109  375955 status.go:330] ha-676550-m02 host status = "Stopped" (err=<nil>)
	I0415 11:32:11.876126  375955 status.go:343] host is not running, skipping remaining checks
	I0415 11:32:11.876144  375955 status.go:257] ha-676550-m02 status: &{Name:ha-676550-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 11:32:11.876167  375955 status.go:255] checking status of ha-676550-m03 ...
	I0415 11:32:11.876464  375955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:32:11.876512  375955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:32:11.892425  375955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I0415 11:32:11.892888  375955 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:32:11.893436  375955 main.go:141] libmachine: Using API Version  1
	I0415 11:32:11.893457  375955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:32:11.893830  375955 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:32:11.894090  375955 main.go:141] libmachine: (ha-676550-m03) Calling .GetState
	I0415 11:32:11.896111  375955 status.go:330] ha-676550-m03 host status = "Running" (err=<nil>)
	I0415 11:32:11.896138  375955 host.go:66] Checking if "ha-676550-m03" exists ...
	I0415 11:32:11.896525  375955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:32:11.896578  375955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:32:11.913873  375955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0415 11:32:11.914390  375955 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:32:11.914977  375955 main.go:141] libmachine: Using API Version  1
	I0415 11:32:11.915008  375955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:32:11.915345  375955 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:32:11.915602  375955 main.go:141] libmachine: (ha-676550-m03) Calling .GetIP
	I0415 11:32:11.919115  375955 main.go:141] libmachine: (ha-676550-m03) DBG | domain ha-676550-m03 has defined MAC address 52:54:00:b4:10:20 in network mk-ha-676550
	I0415 11:32:11.919506  375955 main.go:141] libmachine: (ha-676550-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:10:20", ip: ""} in network mk-ha-676550: {Iface:virbr1 ExpiryTime:2024-04-15 12:28:18 +0000 UTC Type:0 Mac:52:54:00:b4:10:20 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-676550-m03 Clientid:01:52:54:00:b4:10:20}
	I0415 11:32:11.919534  375955 main.go:141] libmachine: (ha-676550-m03) DBG | domain ha-676550-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:b4:10:20 in network mk-ha-676550
	I0415 11:32:11.919805  375955 host.go:66] Checking if "ha-676550-m03" exists ...
	I0415 11:32:11.920173  375955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:32:11.920226  375955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:32:11.938422  375955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0415 11:32:11.939011  375955 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:32:11.939664  375955 main.go:141] libmachine: Using API Version  1
	I0415 11:32:11.939690  375955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:32:11.940117  375955 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:32:11.940749  375955 main.go:141] libmachine: (ha-676550-m03) Calling .DriverName
	I0415 11:32:11.941052  375955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 11:32:11.941101  375955 main.go:141] libmachine: (ha-676550-m03) Calling .GetSSHHostname
	I0415 11:32:11.944787  375955 main.go:141] libmachine: (ha-676550-m03) DBG | domain ha-676550-m03 has defined MAC address 52:54:00:b4:10:20 in network mk-ha-676550
	I0415 11:32:11.945313  375955 main.go:141] libmachine: (ha-676550-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:10:20", ip: ""} in network mk-ha-676550: {Iface:virbr1 ExpiryTime:2024-04-15 12:28:18 +0000 UTC Type:0 Mac:52:54:00:b4:10:20 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-676550-m03 Clientid:01:52:54:00:b4:10:20}
	I0415 11:32:11.945348  375955 main.go:141] libmachine: (ha-676550-m03) DBG | domain ha-676550-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:b4:10:20 in network mk-ha-676550
	I0415 11:32:11.945490  375955 main.go:141] libmachine: (ha-676550-m03) Calling .GetSSHPort
	I0415 11:32:11.945723  375955 main.go:141] libmachine: (ha-676550-m03) Calling .GetSSHKeyPath
	I0415 11:32:11.946024  375955 main.go:141] libmachine: (ha-676550-m03) Calling .GetSSHUsername
	I0415 11:32:11.946277  375955 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/ha-676550-m03/id_rsa Username:docker}
	I0415 11:32:12.034896  375955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 11:32:12.055217  375955 kubeconfig.go:125] found "ha-676550" server: "https://192.168.39.254:8443"
	I0415 11:32:12.055250  375955 api_server.go:166] Checking apiserver status ...
	I0415 11:32:12.055301  375955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 11:32:12.072229  375955 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1272/cgroup
	W0415 11:32:12.083325  375955 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1272/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0415 11:32:12.083410  375955 ssh_runner.go:195] Run: ls
	I0415 11:32:12.088842  375955 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0415 11:32:12.093944  375955 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0415 11:32:12.093969  375955 status.go:422] ha-676550-m03 apiserver status = Running (err=<nil>)
	I0415 11:32:12.093978  375955 status.go:257] ha-676550-m03 status: &{Name:ha-676550-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 11:32:12.093995  375955 status.go:255] checking status of ha-676550-m04 ...
	I0415 11:32:12.094357  375955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:32:12.094400  375955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:32:12.110209  375955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35677
	I0415 11:32:12.110849  375955 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:32:12.111431  375955 main.go:141] libmachine: Using API Version  1
	I0415 11:32:12.111479  375955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:32:12.111988  375955 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:32:12.112242  375955 main.go:141] libmachine: (ha-676550-m04) Calling .GetState
	I0415 11:32:12.114258  375955 status.go:330] ha-676550-m04 host status = "Running" (err=<nil>)
	I0415 11:32:12.114280  375955 host.go:66] Checking if "ha-676550-m04" exists ...
	I0415 11:32:12.114687  375955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:32:12.114735  375955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:32:12.130006  375955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45041
	I0415 11:32:12.130501  375955 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:32:12.131009  375955 main.go:141] libmachine: Using API Version  1
	I0415 11:32:12.131034  375955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:32:12.131354  375955 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:32:12.131548  375955 main.go:141] libmachine: (ha-676550-m04) Calling .GetIP
	I0415 11:32:12.134858  375955 main.go:141] libmachine: (ha-676550-m04) DBG | domain ha-676550-m04 has defined MAC address 52:54:00:87:88:7f in network mk-ha-676550
	I0415 11:32:12.135473  375955 main.go:141] libmachine: (ha-676550-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:88:7f", ip: ""} in network mk-ha-676550: {Iface:virbr1 ExpiryTime:2024-04-15 12:29:55 +0000 UTC Type:0 Mac:52:54:00:87:88:7f Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-676550-m04 Clientid:01:52:54:00:87:88:7f}
	I0415 11:32:12.135511  375955 main.go:141] libmachine: (ha-676550-m04) DBG | domain ha-676550-m04 has defined IP address 192.168.39.234 and MAC address 52:54:00:87:88:7f in network mk-ha-676550
	I0415 11:32:12.135698  375955 host.go:66] Checking if "ha-676550-m04" exists ...
	I0415 11:32:12.136034  375955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:32:12.136085  375955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:32:12.151667  375955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37269
	I0415 11:32:12.152134  375955 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:32:12.152739  375955 main.go:141] libmachine: Using API Version  1
	I0415 11:32:12.152772  375955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:32:12.153136  375955 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:32:12.153374  375955 main.go:141] libmachine: (ha-676550-m04) Calling .DriverName
	I0415 11:32:12.153616  375955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 11:32:12.153658  375955 main.go:141] libmachine: (ha-676550-m04) Calling .GetSSHHostname
	I0415 11:32:12.157334  375955 main.go:141] libmachine: (ha-676550-m04) DBG | domain ha-676550-m04 has defined MAC address 52:54:00:87:88:7f in network mk-ha-676550
	I0415 11:32:12.157810  375955 main.go:141] libmachine: (ha-676550-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:88:7f", ip: ""} in network mk-ha-676550: {Iface:virbr1 ExpiryTime:2024-04-15 12:29:55 +0000 UTC Type:0 Mac:52:54:00:87:88:7f Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:ha-676550-m04 Clientid:01:52:54:00:87:88:7f}
	I0415 11:32:12.157846  375955 main.go:141] libmachine: (ha-676550-m04) DBG | domain ha-676550-m04 has defined IP address 192.168.39.234 and MAC address 52:54:00:87:88:7f in network mk-ha-676550
	I0415 11:32:12.158027  375955 main.go:141] libmachine: (ha-676550-m04) Calling .GetSSHPort
	I0415 11:32:12.158250  375955 main.go:141] libmachine: (ha-676550-m04) Calling .GetSSHKeyPath
	I0415 11:32:12.158416  375955 main.go:141] libmachine: (ha-676550-m04) Calling .GetSSHUsername
	I0415 11:32:12.158634  375955 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/ha-676550-m04/id_rsa Username:docker}
	I0415 11:32:12.244714  375955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 11:32:12.262057  375955 status.go:257] ha-676550-m04 status: &{Name:ha-676550-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (92.50s)
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.43s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.43s)
TestMultiControlPlane/serial/RestartSecondaryNode (44.84s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 node start m02 -v=7 --alsologtostderr
E0415 11:32:43.079491  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-676550 node start m02 -v=7 --alsologtostderr: (43.866906656s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.84s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.57s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.57s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (446.42s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-676550 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-676550 -v=7 --alsologtostderr
E0415 11:33:33.602210  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:34:59.234361  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 11:35:26.920290  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-676550 -v=7 --alsologtostderr: (4m37.219544805s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-676550 --wait=true -v=7 --alsologtostderr
E0415 11:38:33.601852  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:39:56.648876  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:39:59.234996  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-676550 --wait=true -v=7 --alsologtostderr: (2m49.066129004s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-676550
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (446.42s)
TestMultiControlPlane/serial/DeleteSecondaryNode (7.83s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-676550 node delete m03 -v=7 --alsologtostderr: (7.01041138s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.83s)
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.42s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.42s)
TestMultiControlPlane/serial/StopCluster (275.72s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 stop -v=7 --alsologtostderr
E0415 11:43:33.602061  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:44:59.234644  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-676550 stop -v=7 --alsologtostderr: (4m35.597278105s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-676550 status -v=7 --alsologtostderr: exit status 7 (124.057246ms)
-- stdout --
	ha-676550
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-676550-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-676550-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0415 11:45:08.429468  379801 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:45:08.429633  379801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:45:08.429645  379801 out.go:304] Setting ErrFile to fd 2...
	I0415 11:45:08.429651  379801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:45:08.429874  379801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
	I0415 11:45:08.430066  379801 out.go:298] Setting JSON to false
	I0415 11:45:08.430102  379801 mustload.go:65] Loading cluster: ha-676550
	I0415 11:45:08.430212  379801 notify.go:220] Checking for updates...
	I0415 11:45:08.430599  379801 config.go:182] Loaded profile config "ha-676550": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 11:45:08.430619  379801 status.go:255] checking status of ha-676550 ...
	I0415 11:45:08.431024  379801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:45:08.431107  379801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:45:08.453247  379801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34505
	I0415 11:45:08.453742  379801 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:45:08.454418  379801 main.go:141] libmachine: Using API Version  1
	I0415 11:45:08.454447  379801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:45:08.454910  379801 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:45:08.455177  379801 main.go:141] libmachine: (ha-676550) Calling .GetState
	I0415 11:45:08.456964  379801 status.go:330] ha-676550 host status = "Stopped" (err=<nil>)
	I0415 11:45:08.456980  379801 status.go:343] host is not running, skipping remaining checks
	I0415 11:45:08.456989  379801 status.go:257] ha-676550 status: &{Name:ha-676550 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 11:45:08.457034  379801 status.go:255] checking status of ha-676550-m02 ...
	I0415 11:45:08.457316  379801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:45:08.457356  379801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:45:08.472328  379801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33821
	I0415 11:45:08.472803  379801 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:45:08.473348  379801 main.go:141] libmachine: Using API Version  1
	I0415 11:45:08.473379  379801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:45:08.473729  379801 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:45:08.473994  379801 main.go:141] libmachine: (ha-676550-m02) Calling .GetState
	I0415 11:45:08.475535  379801 status.go:330] ha-676550-m02 host status = "Stopped" (err=<nil>)
	I0415 11:45:08.475551  379801 status.go:343] host is not running, skipping remaining checks
	I0415 11:45:08.475559  379801 status.go:257] ha-676550-m02 status: &{Name:ha-676550-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 11:45:08.475581  379801 status.go:255] checking status of ha-676550-m04 ...
	I0415 11:45:08.475910  379801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:45:08.475971  379801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:45:08.490891  379801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42151
	I0415 11:45:08.491348  379801 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:45:08.491853  379801 main.go:141] libmachine: Using API Version  1
	I0415 11:45:08.491874  379801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:45:08.492202  379801 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:45:08.492431  379801 main.go:141] libmachine: (ha-676550-m04) Calling .GetState
	I0415 11:45:08.493880  379801 status.go:330] ha-676550-m04 host status = "Stopped" (err=<nil>)
	I0415 11:45:08.493899  379801 status.go:343] host is not running, skipping remaining checks
	I0415 11:45:08.493907  379801 status.go:257] ha-676550-m04 status: &{Name:ha-676550-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (275.72s)
TestMultiControlPlane/serial/RestartCluster (155.28s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-676550 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0415 11:46:22.280712  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-676550 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m34.464897779s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (155.28s)
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)
TestMultiControlPlane/serial/AddSecondaryNode (71.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-676550 --control-plane -v=7 --alsologtostderr
E0415 11:48:33.602071  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-676550 --control-plane -v=7 --alsologtostderr: (1m11.109790119s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-676550 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.99s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

TestJSONOutput/start/Command (98.54s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-822513 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0415 11:49:59.234925  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-822513 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m38.539074193s)
--- PASS: TestJSONOutput/start/Command (98.54s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-822513 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-822513 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-822513 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-822513 --output=json --user=testUser: (6.711541182s)
--- PASS: TestJSONOutput/stop/Command (6.71s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-280167 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-280167 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.721005ms)

-- stdout --
	{"specversion":"1.0","id":"572fe5ec-8923-48ee-8239-48d96a29484e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-280167] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a17f9e6b-3539-4a87-827d-5d2a02f94416","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18644"}}
	{"specversion":"1.0","id":"f22889b1-5654-4c13-8ec2-13d6bb363b1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7dbb340a-de31-47af-b9c9-d82474a69ea6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18644-354432/kubeconfig"}}
	{"specversion":"1.0","id":"d66edaac-50a7-455a-9c55-b7d8b1212010","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18644-354432/.minikube"}}
	{"specversion":"1.0","id":"69f2a3a0-9cfc-41e6-9514-bd1cf2be7bf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e38933fc-021b-4400-aac2-e42fdd1db6e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4b1167dc-114b-459e-92f5-bf83f626090b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-280167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-280167
--- PASS: TestErrorJSONOutput (0.23s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (89.96s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-088787 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-088787 --driver=kvm2  --container-runtime=containerd: (43.981866948s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-091661 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-091661 --driver=kvm2  --container-runtime=containerd: (43.44601467s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-088787
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-091661
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-091661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-091661
helpers_test.go:175: Cleaning up "first-088787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-088787
--- PASS: TestMinikubeProfile (89.96s)

TestMountStart/serial/StartWithMountFirst (29.92s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-650275 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-650275 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.92224509s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.92s)

TestMountStart/serial/VerifyMountFirst (0.41s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-650275 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-650275 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

TestMountStart/serial/StartWithMountSecond (29.28s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-670125 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-670125 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.2822698s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.28s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670125 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670125 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (0.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-650275 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670125 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670125 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (1.39s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-670125
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-670125: (1.391281217s)
--- PASS: TestMountStart/serial/Stop (1.39s)

TestMountStart/serial/RestartStopped (23.07s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-670125
E0415 11:53:33.601833  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-670125: (22.072364914s)
--- PASS: TestMountStart/serial/RestartStopped (23.07s)

TestMountStart/serial/VerifyMountPostStop (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670125 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-670125 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

TestMultiNode/serial/FreshStart2Nodes (133.49s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-607076 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0415 11:54:59.235007  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-607076 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m13.033507742s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (133.49s)

TestMultiNode/serial/DeployApp2Nodes (5.28s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-607076 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-607076 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-607076 -- rollout status deployment/busybox: (3.595199941s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-607076 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-607076 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-607076 -- exec busybox-7fdf7869d9-nkxc8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-607076 -- exec busybox-7fdf7869d9-xfjd8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-607076 -- exec busybox-7fdf7869d9-nkxc8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-607076 -- exec busybox-7fdf7869d9-xfjd8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-607076 -- exec busybox-7fdf7869d9-nkxc8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-607076 -- exec busybox-7fdf7869d9-xfjd8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.28s)

TestMultiNode/serial/PingHostFrom2Pods (0.87s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-607076 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-607076 -- exec busybox-7fdf7869d9-nkxc8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-607076 -- exec busybox-7fdf7869d9-nkxc8 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-607076 -- exec busybox-7fdf7869d9-xfjd8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-607076 -- exec busybox-7fdf7869d9-xfjd8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

TestMultiNode/serial/AddNode (39.91s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-607076 -v 3 --alsologtostderr
E0415 11:56:36.649906  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-607076 -v 3 --alsologtostderr: (39.304684656s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (39.91s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-607076 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

TestMultiNode/serial/CopyFile (7.9s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 cp testdata/cp-test.txt multinode-607076:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 cp multinode-607076:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile638363876/001/cp-test_multinode-607076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 cp multinode-607076:/home/docker/cp-test.txt multinode-607076-m02:/home/docker/cp-test_multinode-607076_multinode-607076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076-m02 "sudo cat /home/docker/cp-test_multinode-607076_multinode-607076-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 cp multinode-607076:/home/docker/cp-test.txt multinode-607076-m03:/home/docker/cp-test_multinode-607076_multinode-607076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076-m03 "sudo cat /home/docker/cp-test_multinode-607076_multinode-607076-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 cp testdata/cp-test.txt multinode-607076-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 cp multinode-607076-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile638363876/001/cp-test_multinode-607076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 cp multinode-607076-m02:/home/docker/cp-test.txt multinode-607076:/home/docker/cp-test_multinode-607076-m02_multinode-607076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076 "sudo cat /home/docker/cp-test_multinode-607076-m02_multinode-607076.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 cp multinode-607076-m02:/home/docker/cp-test.txt multinode-607076-m03:/home/docker/cp-test_multinode-607076-m02_multinode-607076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076-m03 "sudo cat /home/docker/cp-test_multinode-607076-m02_multinode-607076-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 cp testdata/cp-test.txt multinode-607076-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 cp multinode-607076-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile638363876/001/cp-test_multinode-607076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 cp multinode-607076-m03:/home/docker/cp-test.txt multinode-607076:/home/docker/cp-test_multinode-607076-m03_multinode-607076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076 "sudo cat /home/docker/cp-test_multinode-607076-m03_multinode-607076.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 cp multinode-607076-m03:/home/docker/cp-test.txt multinode-607076-m02:/home/docker/cp-test_multinode-607076-m03_multinode-607076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 ssh -n multinode-607076-m02 "sudo cat /home/docker/cp-test_multinode-607076-m03_multinode-607076-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.90s)

TestMultiNode/serial/StopNode (2.39s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-607076 node stop m03: (1.448694873s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-607076 status: exit status 7 (464.529059ms)
-- stdout --
	multinode-607076
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-607076-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-607076-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-607076 status --alsologtostderr: exit status 7 (474.431206ms)
-- stdout --
	multinode-607076
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-607076-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-607076-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0415 11:56:53.933572  387498 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:56:53.933694  387498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:56:53.933701  387498 out.go:304] Setting ErrFile to fd 2...
	I0415 11:56:53.933708  387498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:56:53.933961  387498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
	I0415 11:56:53.934233  387498 out.go:298] Setting JSON to false
	I0415 11:56:53.934268  387498 mustload.go:65] Loading cluster: multinode-607076
	I0415 11:56:53.934408  387498 notify.go:220] Checking for updates...
	I0415 11:56:53.934650  387498 config.go:182] Loaded profile config "multinode-607076": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 11:56:53.934665  387498 status.go:255] checking status of multinode-607076 ...
	I0415 11:56:53.935059  387498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:56:53.935120  387498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:56:53.954618  387498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35329
	I0415 11:56:53.955121  387498 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:56:53.955794  387498 main.go:141] libmachine: Using API Version  1
	I0415 11:56:53.955820  387498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:56:53.956270  387498 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:56:53.956538  387498 main.go:141] libmachine: (multinode-607076) Calling .GetState
	I0415 11:56:53.958341  387498 status.go:330] multinode-607076 host status = "Running" (err=<nil>)
	I0415 11:56:53.958361  387498 host.go:66] Checking if "multinode-607076" exists ...
	I0415 11:56:53.958799  387498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:56:53.958889  387498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:56:53.975123  387498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43591
	I0415 11:56:53.975601  387498 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:56:53.976224  387498 main.go:141] libmachine: Using API Version  1
	I0415 11:56:53.976250  387498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:56:53.976616  387498 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:56:53.976850  387498 main.go:141] libmachine: (multinode-607076) Calling .GetIP
	I0415 11:56:53.979997  387498 main.go:141] libmachine: (multinode-607076) DBG | domain multinode-607076 has defined MAC address 52:54:00:7f:ef:e3 in network mk-multinode-607076
	I0415 11:56:53.980498  387498 main.go:141] libmachine: (multinode-607076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ef:e3", ip: ""} in network mk-multinode-607076: {Iface:virbr1 ExpiryTime:2024-04-15 12:53:59 +0000 UTC Type:0 Mac:52:54:00:7f:ef:e3 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:multinode-607076 Clientid:01:52:54:00:7f:ef:e3}
	I0415 11:56:53.980527  387498 main.go:141] libmachine: (multinode-607076) DBG | domain multinode-607076 has defined IP address 192.168.39.155 and MAC address 52:54:00:7f:ef:e3 in network mk-multinode-607076
	I0415 11:56:53.980733  387498 host.go:66] Checking if "multinode-607076" exists ...
	I0415 11:56:53.981061  387498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:56:53.981113  387498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:56:53.997411  387498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44303
	I0415 11:56:53.997991  387498 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:56:53.998503  387498 main.go:141] libmachine: Using API Version  1
	I0415 11:56:53.998525  387498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:56:53.998914  387498 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:56:53.999093  387498 main.go:141] libmachine: (multinode-607076) Calling .DriverName
	I0415 11:56:53.999262  387498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 11:56:53.999286  387498 main.go:141] libmachine: (multinode-607076) Calling .GetSSHHostname
	I0415 11:56:54.003064  387498 main.go:141] libmachine: (multinode-607076) DBG | domain multinode-607076 has defined MAC address 52:54:00:7f:ef:e3 in network mk-multinode-607076
	I0415 11:56:54.003593  387498 main.go:141] libmachine: (multinode-607076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:ef:e3", ip: ""} in network mk-multinode-607076: {Iface:virbr1 ExpiryTime:2024-04-15 12:53:59 +0000 UTC Type:0 Mac:52:54:00:7f:ef:e3 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:multinode-607076 Clientid:01:52:54:00:7f:ef:e3}
	I0415 11:56:54.003634  387498 main.go:141] libmachine: (multinode-607076) DBG | domain multinode-607076 has defined IP address 192.168.39.155 and MAC address 52:54:00:7f:ef:e3 in network mk-multinode-607076
	I0415 11:56:54.003855  387498 main.go:141] libmachine: (multinode-607076) Calling .GetSSHPort
	I0415 11:56:54.004118  387498 main.go:141] libmachine: (multinode-607076) Calling .GetSSHKeyPath
	I0415 11:56:54.004301  387498 main.go:141] libmachine: (multinode-607076) Calling .GetSSHUsername
	I0415 11:56:54.004504  387498 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/multinode-607076/id_rsa Username:docker}
	I0415 11:56:54.092566  387498 ssh_runner.go:195] Run: systemctl --version
	I0415 11:56:54.099735  387498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 11:56:54.120259  387498 kubeconfig.go:125] found "multinode-607076" server: "https://192.168.39.155:8443"
	I0415 11:56:54.120304  387498 api_server.go:166] Checking apiserver status ...
	I0415 11:56:54.120350  387498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 11:56:54.137758  387498 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	W0415 11:56:54.149426  387498 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0415 11:56:54.149514  387498 ssh_runner.go:195] Run: ls
	I0415 11:56:54.154963  387498 api_server.go:253] Checking apiserver healthz at https://192.168.39.155:8443/healthz ...
	I0415 11:56:54.159727  387498 api_server.go:279] https://192.168.39.155:8443/healthz returned 200:
	ok
	I0415 11:56:54.159786  387498 status.go:422] multinode-607076 apiserver status = Running (err=<nil>)
	I0415 11:56:54.159822  387498 status.go:257] multinode-607076 status: &{Name:multinode-607076 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 11:56:54.159851  387498 status.go:255] checking status of multinode-607076-m02 ...
	I0415 11:56:54.160200  387498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:56:54.160249  387498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:56:54.176044  387498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0415 11:56:54.176541  387498 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:56:54.177212  387498 main.go:141] libmachine: Using API Version  1
	I0415 11:56:54.177236  387498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:56:54.177583  387498 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:56:54.177824  387498 main.go:141] libmachine: (multinode-607076-m02) Calling .GetState
	I0415 11:56:54.179664  387498 status.go:330] multinode-607076-m02 host status = "Running" (err=<nil>)
	I0415 11:56:54.179683  387498 host.go:66] Checking if "multinode-607076-m02" exists ...
	I0415 11:56:54.180008  387498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:56:54.180074  387498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:56:54.195938  387498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44805
	I0415 11:56:54.196449  387498 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:56:54.196977  387498 main.go:141] libmachine: Using API Version  1
	I0415 11:56:54.197000  387498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:56:54.197354  387498 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:56:54.197650  387498 main.go:141] libmachine: (multinode-607076-m02) Calling .GetIP
	I0415 11:56:54.201034  387498 main.go:141] libmachine: (multinode-607076-m02) DBG | domain multinode-607076-m02 has defined MAC address 52:54:00:7a:c1:4e in network mk-multinode-607076
	I0415 11:56:54.201632  387498 main.go:141] libmachine: (multinode-607076-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:c1:4e", ip: ""} in network mk-multinode-607076: {Iface:virbr1 ExpiryTime:2024-04-15 12:55:31 +0000 UTC Type:0 Mac:52:54:00:7a:c1:4e Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-607076-m02 Clientid:01:52:54:00:7a:c1:4e}
	I0415 11:56:54.201672  387498 main.go:141] libmachine: (multinode-607076-m02) DBG | domain multinode-607076-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:7a:c1:4e in network mk-multinode-607076
	I0415 11:56:54.201832  387498 host.go:66] Checking if "multinode-607076-m02" exists ...
	I0415 11:56:54.202164  387498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:56:54.202203  387498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:56:54.218657  387498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43079
	I0415 11:56:54.219118  387498 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:56:54.219596  387498 main.go:141] libmachine: Using API Version  1
	I0415 11:56:54.219620  387498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:56:54.219961  387498 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:56:54.220213  387498 main.go:141] libmachine: (multinode-607076-m02) Calling .DriverName
	I0415 11:56:54.220521  387498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 11:56:54.220549  387498 main.go:141] libmachine: (multinode-607076-m02) Calling .GetSSHHostname
	I0415 11:56:54.223850  387498 main.go:141] libmachine: (multinode-607076-m02) DBG | domain multinode-607076-m02 has defined MAC address 52:54:00:7a:c1:4e in network mk-multinode-607076
	I0415 11:56:54.224393  387498 main.go:141] libmachine: (multinode-607076-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:c1:4e", ip: ""} in network mk-multinode-607076: {Iface:virbr1 ExpiryTime:2024-04-15 12:55:31 +0000 UTC Type:0 Mac:52:54:00:7a:c1:4e Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-607076-m02 Clientid:01:52:54:00:7a:c1:4e}
	I0415 11:56:54.224424  387498 main.go:141] libmachine: (multinode-607076-m02) DBG | domain multinode-607076-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:7a:c1:4e in network mk-multinode-607076
	I0415 11:56:54.224596  387498 main.go:141] libmachine: (multinode-607076-m02) Calling .GetSSHPort
	I0415 11:56:54.224788  387498 main.go:141] libmachine: (multinode-607076-m02) Calling .GetSSHKeyPath
	I0415 11:56:54.224999  387498 main.go:141] libmachine: (multinode-607076-m02) Calling .GetSSHUsername
	I0415 11:56:54.225144  387498 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18644-354432/.minikube/machines/multinode-607076-m02/id_rsa Username:docker}
	I0415 11:56:54.311334  387498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 11:56:54.327498  387498 status.go:257] multinode-607076-m02 status: &{Name:multinode-607076-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0415 11:56:54.327552  387498 status.go:255] checking status of multinode-607076-m03 ...
	I0415 11:56:54.327993  387498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 11:56:54.328047  387498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 11:56:54.344071  387498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37749
	I0415 11:56:54.344535  387498 main.go:141] libmachine: () Calling .GetVersion
	I0415 11:56:54.345022  387498 main.go:141] libmachine: Using API Version  1
	I0415 11:56:54.345049  387498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 11:56:54.345456  387498 main.go:141] libmachine: () Calling .GetMachineName
	I0415 11:56:54.345688  387498 main.go:141] libmachine: (multinode-607076-m03) Calling .GetState
	I0415 11:56:54.347483  387498 status.go:330] multinode-607076-m03 host status = "Stopped" (err=<nil>)
	I0415 11:56:54.347499  387498 status.go:343] host is not running, skipping remaining checks
	I0415 11:56:54.347505  387498 status.go:257] multinode-607076-m03 status: &{Name:multinode-607076-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)

TestMultiNode/serial/StartAfterStop (26.27s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-607076 node start m03 -v=7 --alsologtostderr: (25.59890931s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (26.27s)

TestMultiNode/serial/RestartKeepsNodes (291.79s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-607076
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-607076
E0415 11:58:33.603266  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 11:59:59.235872  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-607076: (3m4.731416297s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-607076 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-607076 --wait=true -v=8 --alsologtostderr: (1m46.929103256s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-607076
--- PASS: TestMultiNode/serial/RestartKeepsNodes (291.79s)

TestMultiNode/serial/DeleteNode (2.42s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-607076 node delete m03: (1.857366397s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.42s)

TestMultiNode/serial/StopMultiNode (183.47s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 stop
E0415 12:03:02.281636  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 12:03:33.603359  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 12:04:59.235932  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-607076 stop: (3m3.265505254s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-607076 status: exit status 7 (105.377397ms)
-- stdout --
	multinode-607076
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-607076-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-607076 status --alsologtostderr: exit status 7 (96.186367ms)
-- stdout --
	multinode-607076
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-607076-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0415 12:05:18.250768  390103 out.go:291] Setting OutFile to fd 1 ...
	I0415 12:05:18.250914  390103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 12:05:18.250929  390103 out.go:304] Setting ErrFile to fd 2...
	I0415 12:05:18.250933  390103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 12:05:18.251134  390103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
	I0415 12:05:18.251311  390103 out.go:298] Setting JSON to false
	I0415 12:05:18.251341  390103 mustload.go:65] Loading cluster: multinode-607076
	I0415 12:05:18.251475  390103 notify.go:220] Checking for updates...
	I0415 12:05:18.251716  390103 config.go:182] Loaded profile config "multinode-607076": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 12:05:18.251733  390103 status.go:255] checking status of multinode-607076 ...
	I0415 12:05:18.252109  390103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 12:05:18.252169  390103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 12:05:18.267182  390103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I0415 12:05:18.267654  390103 main.go:141] libmachine: () Calling .GetVersion
	I0415 12:05:18.268297  390103 main.go:141] libmachine: Using API Version  1
	I0415 12:05:18.268319  390103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 12:05:18.268672  390103 main.go:141] libmachine: () Calling .GetMachineName
	I0415 12:05:18.268856  390103 main.go:141] libmachine: (multinode-607076) Calling .GetState
	I0415 12:05:18.270512  390103 status.go:330] multinode-607076 host status = "Stopped" (err=<nil>)
	I0415 12:05:18.270530  390103 status.go:343] host is not running, skipping remaining checks
	I0415 12:05:18.270538  390103 status.go:257] multinode-607076 status: &{Name:multinode-607076 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 12:05:18.270591  390103 status.go:255] checking status of multinode-607076-m02 ...
	I0415 12:05:18.271013  390103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0415 12:05:18.271060  390103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 12:05:18.286679  390103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I0415 12:05:18.287080  390103 main.go:141] libmachine: () Calling .GetVersion
	I0415 12:05:18.287538  390103 main.go:141] libmachine: Using API Version  1
	I0415 12:05:18.287567  390103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 12:05:18.288005  390103 main.go:141] libmachine: () Calling .GetMachineName
	I0415 12:05:18.288234  390103 main.go:141] libmachine: (multinode-607076-m02) Calling .GetState
	I0415 12:05:18.289996  390103 status.go:330] multinode-607076-m02 host status = "Stopped" (err=<nil>)
	I0415 12:05:18.290016  390103 status.go:343] host is not running, skipping remaining checks
	I0415 12:05:18.290021  390103 status.go:257] multinode-607076-m02 status: &{Name:multinode-607076-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.47s)

TestMultiNode/serial/RestartMultiNode (78.29s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-607076 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-607076 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m17.74047958s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-607076 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.29s)

TestMultiNode/serial/ValidateNameConflict (48.51s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-607076
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-607076-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-607076-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (79.636522ms)
-- stdout --
	* [multinode-607076-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18644-354432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18644-354432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-607076-m02' is duplicated with machine name 'multinode-607076-m02' in profile 'multinode-607076'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-607076-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-607076-m03 --driver=kvm2  --container-runtime=containerd: (47.13182254s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-607076
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-607076: exit status 80 (234.994264ms)
-- stdout --
	* Adding node m03 to cluster multinode-607076 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-607076-m03 already exists in multinode-607076-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-607076-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.51s)

TestPreload (312.64s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-852828 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0415 12:08:33.602867  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 12:09:59.235129  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-852828 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m40.17378299s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-852828 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-852828 image pull gcr.io/k8s-minikube/busybox: (2.451354651s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-852828
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-852828: (1m31.597447807s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-852828 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-852828 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (57.084867079s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-852828 image list
helpers_test.go:175: Cleaning up "test-preload-852828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-852828
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-852828: (1.082297271s)
--- PASS: TestPreload (312.64s)

TestScheduledStopUnix (117.85s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-605417 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0415 12:13:16.650304  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-605417 --memory=2048 --driver=kvm2  --container-runtime=containerd: (46.053090972s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-605417 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-605417 -n scheduled-stop-605417
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-605417 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-605417 --cancel-scheduled
E0415 12:13:33.601963  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-605417 -n scheduled-stop-605417
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-605417
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-605417 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-605417
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-605417: exit status 7 (87.034969ms)

-- stdout --
	scheduled-stop-605417
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-605417 -n scheduled-stop-605417
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-605417 -n scheduled-stop-605417: exit status 7 (82.953405ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-605417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-605417
--- PASS: TestScheduledStopUnix (117.85s)

TestRunningBinaryUpgrade (226.51s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1615104052 start -p running-upgrade-306958 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0415 12:14:59.234834  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1615104052 start -p running-upgrade-306958 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m7.947109274s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-306958 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-306958 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m34.857945027s)
helpers_test.go:175: Cleaning up "running-upgrade-306958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-306958
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-306958: (1.231654782s)
--- PASS: TestRunningBinaryUpgrade (226.51s)

TestKubernetesUpgrade (179.46s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-972395 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0415 12:18:33.602398  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-972395 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m32.483834576s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-972395
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-972395: (1.699673662s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-972395 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-972395 status --format={{.Host}}: exit status 7 (107.626151ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-972395 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0415 12:19:59.235175  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-972395 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (53.040209607s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-972395 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-972395 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-972395 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (100.124862ms)

-- stdout --
	* [kubernetes-upgrade-972395] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18644-354432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18644-354432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-972395
	    minikube start -p kubernetes-upgrade-972395 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9723952 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-972395 --kubernetes-version=v1.30.0-rc.2

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-972395 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-972395 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (30.796941345s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-972395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-972395
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-972395: (1.16331919s)
--- PASS: TestKubernetesUpgrade (179.46s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-261325 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-261325 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (95.766339ms)

-- stdout --
	* [NoKubernetes-261325] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18644-354432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18644-354432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (95.39s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-261325 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-261325 --driver=kvm2  --container-runtime=containerd: (1m35.117910279s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-261325 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (95.39s)

TestNetworkPlugins/group/false (5s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-547820 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-547820 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (1.08394612s)

-- stdout --
	* [false-547820] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18644-354432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18644-354432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration

-- /stdout --
** stderr ** 
	I0415 12:15:50.618021  395565 out.go:291] Setting OutFile to fd 1 ...
	I0415 12:15:50.618214  395565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 12:15:50.618233  395565 out.go:304] Setting ErrFile to fd 2...
	I0415 12:15:50.618241  395565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 12:15:50.618473  395565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18644-354432/.minikube/bin
	I0415 12:15:50.619115  395565 out.go:298] Setting JSON to false
	I0415 12:15:50.620307  395565 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7094,"bootTime":1713176257,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 12:15:50.620413  395565 start.go:139] virtualization: kvm guest
	I0415 12:15:50.623078  395565 out.go:177] * [false-547820] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 12:15:50.624851  395565 notify.go:220] Checking for updates...
	I0415 12:15:50.624856  395565 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 12:15:50.626491  395565 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 12:15:50.627988  395565 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18644-354432/kubeconfig
	I0415 12:15:50.629365  395565 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18644-354432/.minikube
	I0415 12:15:50.630848  395565 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0415 12:15:50.632481  395565 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 12:15:50.634681  395565 config.go:182] Loaded profile config "NoKubernetes-261325": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 12:15:50.634875  395565 config.go:182] Loaded profile config "offline-containerd-248594": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0415 12:15:50.635039  395565 config.go:182] Loaded profile config "running-upgrade-306958": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0415 12:15:50.635176  395565 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 12:15:51.625314  395565 out.go:177] * Using the kvm2 driver based on user configuration
	I0415 12:15:51.626947  395565 start.go:297] selected driver: kvm2
	I0415 12:15:51.626966  395565 start.go:901] validating driver "kvm2" against <nil>
	I0415 12:15:51.626979  395565 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 12:15:51.629371  395565 out.go:177] 
	W0415 12:15:51.630558  395565 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0415 12:15:51.631907  395565 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-547820 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-547820

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-547820

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-547820

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-547820

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-547820

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-547820

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-547820

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-547820

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-547820

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-547820

>>> host: /etc/nsswitch.conf:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: /etc/hosts:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: /etc/resolv.conf:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-547820

>>> host: crictl pods:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: crictl containers:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> k8s: describe netcat deployment:
error: context "false-547820" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-547820" does not exist

>>> k8s: netcat logs:
error: context "false-547820" does not exist

>>> k8s: describe coredns deployment:
error: context "false-547820" does not exist

>>> k8s: describe coredns pods:
error: context "false-547820" does not exist

>>> k8s: coredns logs:
error: context "false-547820" does not exist

>>> k8s: describe api server pod(s):
error: context "false-547820" does not exist

>>> k8s: api server logs:
error: context "false-547820" does not exist

>>> host: /etc/cni:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: ip a s:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: ip r s:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: iptables-save:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: iptables table nat:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> k8s: describe kube-proxy daemon set:
error: context "false-547820" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-547820" does not exist

>>> k8s: kube-proxy logs:
error: context "false-547820" does not exist

>>> host: kubelet daemon status:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: kubelet daemon config:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> k8s: kubelet logs:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18644-354432/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Apr 2024 12:15:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.39.39:8443
  name: offline-containerd-248594
contexts:
- context:
    cluster: offline-containerd-248594
    extensions:
    - extension:
        last-update: Mon, 15 Apr 2024 12:15:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: offline-containerd-248594
  name: offline-containerd-248594
current-context: ""
kind: Config
preferences: {}
users:
- name: offline-containerd-248594
  user:
    client-certificate: /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/offline-containerd-248594/client.crt
    client-key: /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/offline-containerd-248594/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-547820

>>> host: docker daemon status:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: docker daemon config:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: /etc/docker/daemon.json:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: docker system info:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: cri-docker daemon status:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: cri-docker daemon config:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-547820"

                                                
                                                
----------------------- debugLogs end: false-547820 [took: 3.747257222s] --------------------------------
helpers_test.go:175: Cleaning up "false-547820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-547820
--- PASS: TestNetworkPlugins/group/false (5.00s)

TestPause/serial/Start (76.05s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-903682 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-903682 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m16.054018095s)
--- PASS: TestPause/serial/Start (76.05s)

TestNoKubernetes/serial/StartWithStopK8s (51.98s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-261325 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-261325 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (50.668696118s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-261325 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-261325 status -o json: exit status 2 (267.966656ms)

-- stdout --
	{"Name":"NoKubernetes-261325","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-261325
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-261325: (1.044207554s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (51.98s)

TestNoKubernetes/serial/Start (35.22s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-261325 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-261325 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (35.214831484s)
--- PASS: TestNoKubernetes/serial/Start (35.22s)

TestPause/serial/SecondStartNoReconfiguration (57.31s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-903682 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-903682 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (57.28809489s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (57.31s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-261325 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-261325 "sudo systemctl is-active --quiet service kubelet": exit status 1 (224.508161ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
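The exit status 3 surfaced through `minikube ssh` above is systemd's code for an inactive unit: `systemctl is-active` exits 0 when the unit is active and 3 when it is not, so the test gates on the exit status rather than parsing output. A minimal sketch of that check, using a hypothetical stub in place of `systemctl` so it runs outside a systemd host:

```shell
# Stub standing in for the real systemctl (assumption for illustration only);
# it mimics the inactive case by returning systemd's exit code 3.
systemctl() { return 3; }

# Same shape as the test's check: rely on the exit status, not the output.
systemctl is-active --quiet kubelet
status=$?
if [ "$status" -eq 0 ]; then
  echo "kubelet running"
else
  echo "kubelet not running (exit $status)"   # prints "kubelet not running (exit 3)"
fi
```

Because `minikube ssh` propagates the remote command's exit status, the outer test sees the non-zero result as "exit status 1" while the stderr records the underlying status 3.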
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (1.62s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.62s)

TestNoKubernetes/serial/Stop (1.45s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-261325
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-261325: (1.449971642s)
--- PASS: TestNoKubernetes/serial/Stop (1.45s)

TestNoKubernetes/serial/StartNoArgs (23.82s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-261325 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-261325 --driver=kvm2  --container-runtime=containerd: (23.818571737s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.82s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-261325 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-261325 "sudo systemctl is-active --quiet service kubelet": exit status 1 (219.838294ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestPause/serial/Pause (0.84s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-903682 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.84s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-903682 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-903682 --output=json --layout=cluster: exit status 2 (320.680706ms)

-- stdout --
	{"Name":"pause-903682","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-903682","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
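The cluster-layout JSON above encodes component state as HTTP-like status codes (200 = OK, 405 = Stopped, 418 = Paused, as seen in its fields), and the command exits non-zero when the cluster is not fully running. A minimal sketch of classifying that output by its top-level StatusCode, using an abridged copy of the JSON from the log (a naive substring match, not a real JSON parser):

```shell
# Abridged from the captured stdout above; only the top-level fields are kept
# so the substring match below sees a single StatusCode.
status_json='{"Name":"pause-903682","StatusCode":418,"StatusName":"Paused"}'

# Codes taken from the output above: 200 = OK, 405 = Stopped, 418 = Paused.
case "$status_json" in
  *'"StatusCode":200'*) echo "cluster running" ;;
  *'"StatusCode":418'*) echo "cluster paused"  ;;   # prints "cluster paused"
  *'"StatusCode":405'*) echo "cluster stopped" ;;
  *) echo "unknown state" ;;
esac
```

A real consumer would parse the JSON properly (e.g. with `jq`), since the full output nests per-component codes under `Components` and `Nodes`.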
--- PASS: TestPause/serial/VerifyStatus (0.32s)

TestPause/serial/Unpause (1s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-903682 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (1.00s)

TestPause/serial/PauseAgain (0.9s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-903682 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.90s)

TestPause/serial/DeletePaused (1.87s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-903682 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-903682 --alsologtostderr -v=5: (1.866876617s)
--- PASS: TestPause/serial/DeletePaused (1.87s)

TestPause/serial/VerifyDeletedResources (0.28s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.28s)

TestStoppedBinaryUpgrade/Setup (2.29s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.29s)

TestStoppedBinaryUpgrade/Upgrade (172.14s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2341628248 start -p stopped-upgrade-434039 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2341628248 start -p stopped-upgrade-434039 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m19.26584897s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2341628248 -p stopped-upgrade-434039 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2341628248 -p stopped-upgrade-434039 stop: (1.670595543s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-434039 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-434039 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m31.202309214s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (172.14s)

TestNetworkPlugins/group/auto/Start (127.53s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-547820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
E0415 12:19:42.282453  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-547820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m7.529613246s)
--- PASS: TestNetworkPlugins/group/auto/Start (127.53s)

TestNetworkPlugins/group/kindnet/Start (63.96s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-547820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-547820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m3.96409155s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.96s)

TestNetworkPlugins/group/calico/Start (117.67s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-547820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-547820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m57.66954601s)
--- PASS: TestNetworkPlugins/group/calico/Start (117.67s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-547820 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-547820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zxvkx" [bfc05d8c-ef88-4953-a15b-21a66b41fbcf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zxvkx" [bfc05d8c-ef88-4953-a15b-21a66b41fbcf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004707008s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-547820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-547820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-547820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-434039
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-434039: (1.187521447s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

TestNetworkPlugins/group/custom-flannel/Start (110.94s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-547820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-547820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m50.938228304s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (110.94s)

TestNetworkPlugins/group/enable-default-cni/Start (147.76s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-547820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-547820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (2m27.757342618s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (147.76s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-zlshr" [d260de22-5c61-40cd-9974-7e99bd08eef7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.008906681s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-547820 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-547820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mhb99" [8f4c0708-7ae9-4b60-9970-6c009880ee0c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mhb99" [8f4c0708-7ae9-4b60-9970-6c009880ee0c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00510568s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-547820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-547820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-547820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/flannel/Start (98.12s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-547820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-547820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m38.121238714s)
--- PASS: TestNetworkPlugins/group/flannel/Start (98.12s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-x4vfn" [0374283b-f6e9-4070-a42f-315a45911c87] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006182745s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-547820 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-547820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dn5nz" [67199759-2651-441c-821b-f5601d3b9f8a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0415 12:23:33.602323  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-dn5nz" [67199759-2651-441c-821b-f5601d3b9f8a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005465869s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.23s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-547820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-547820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-547820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-547820 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-547820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7vkkh" [0d7d8fc1-280c-4501-81f7-c2d3d0cbc5d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7vkkh" [0d7d8fc1-280c-4501-81f7-c2d3d0cbc5d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.013450546s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.32s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-547820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-547820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-547820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (101.65s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-547820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-547820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m41.645584912s)
--- PASS: TestNetworkPlugins/group/bridge/Start (101.65s)

TestStartStop/group/old-k8s-version/serial/FirstStart (200.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-855937 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-855937 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m20.068356534s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (200.07s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-547820 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-547820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mgxvc" [a1d64724-eb1c-498c-b6aa-cd16df0f2173] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mgxvc" [a1d64724-eb1c-498c-b6aa-cd16df0f2173] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.006036682s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-547820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-547820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-547820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dh4fd" [85712ed6-0c60-4c8e-b946-dc9d9df4b83e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005444622s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-547820 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-547820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8bhg6" [874f509e-8860-4b3f-a858-bc456df44638] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8bhg6" [874f509e-8860-4b3f-a858-bc456df44638] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.009452806s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

TestStartStop/group/no-preload/serial/FirstStart (132.00s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-570125 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-570125 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (2m11.998880009s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (132.00s)

TestNetworkPlugins/group/flannel/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-547820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-547820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-547820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-674160 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-674160 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m11.966226248s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.97s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-547820 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-547820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f2sfk" [7c41c97d-920e-4844-ba6b-d535468961d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-f2sfk" [7c41c97d-920e-4844-ba6b-d535468961d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005744759s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

TestNetworkPlugins/group/bridge/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-547820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-547820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-547820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
E0415 12:32:55.988840  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
E0415 12:33:25.131595  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:33:27.450293  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:33:33.601857  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory

TestStartStop/group/newest-cni/serial/FirstStart (58.55s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-202731 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
E0415 12:26:28.872522  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
E0415 12:26:28.878010  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
E0415 12:26:28.888393  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
E0415 12:26:28.908706  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
E0415 12:26:28.949003  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
E0415 12:26:29.029384  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
E0415 12:26:29.190218  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
E0415 12:26:29.510439  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-202731 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (58.54907753s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.55s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-674160 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [85c42881-8a1a-4061-8da6-c50578624c24] Pending
E0415 12:26:30.151277  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
helpers_test.go:344: "busybox" [85c42881-8a1a-4061-8da6-c50578624c24] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0415 12:26:31.432501  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
helpers_test.go:344: "busybox" [85c42881-8a1a-4061-8da6-c50578624c24] Running
E0415 12:26:33.993093  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004686316s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-674160 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-674160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0415 12:26:39.114217  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-674160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.179776023s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-674160 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-674160 --alsologtostderr -v=3
E0415 12:26:49.355103  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-674160 --alsologtostderr -v=3: (1m32.547745277s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.55s)

TestStartStop/group/no-preload/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-570125 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8ae6f08a-9779-44e2-9028-835afe2a8129] Pending
helpers_test.go:344: "busybox" [8ae6f08a-9779-44e2-9028-835afe2a8129] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8ae6f08a-9779-44e2-9028-835afe2a8129] Running
E0415 12:27:09.835824  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00515614s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-570125 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-202731 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-202731 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.130483221s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/newest-cni/serial/Stop (7.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-202731 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-202731 --alsologtostderr -v=3: (7.37357987s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.37s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-570125 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-570125 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/no-preload/serial/Stop (92.54s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-570125 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-570125 --alsologtostderr -v=3: (1m32.54205118s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.54s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-202731 -n newest-cni-202731
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-202731 -n newest-cni-202731: exit status 7 (86.423603ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-202731 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (36.38s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-202731 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
E0415 12:27:28.304521  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
E0415 12:27:28.309779  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
E0415 12:27:28.320066  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
E0415 12:27:28.341218  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
E0415 12:27:28.381615  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
E0415 12:27:28.462036  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
E0415 12:27:28.622561  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
E0415 12:27:28.943175  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
E0415 12:27:29.583611  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
E0415 12:27:30.864272  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
E0415 12:27:33.425147  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-202731 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (36.087077963s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-202731 -n newest-cni-202731
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.38s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-855937 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e1f6c1fa-22b5-4fe8-ae87-23aa99999bea] Pending
helpers_test.go:344: "busybox" [e1f6c1fa-22b5-4fe8-ae87-23aa99999bea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0415 12:27:38.545736  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e1f6c1fa-22b5-4fe8-ae87-23aa99999bea] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004183552s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-855937 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-855937 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-855937 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/old-k8s-version/serial/Stop (92.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-855937 --alsologtostderr -v=3
E0415 12:27:48.786257  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
E0415 12:27:50.796288  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-855937 --alsologtostderr -v=3: (1m32.548001716s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.55s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-202731 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (2.51s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-202731 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-202731 -n newest-cni-202731
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-202731 -n newest-cni-202731: exit status 2 (258.406607ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-202731 -n newest-cni-202731
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-202731 -n newest-cni-202731: exit status 2 (258.014736ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-202731 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-202731 -n newest-cni-202731
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-202731 -n newest-cni-202731
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.51s)

TestStartStop/group/embed-certs/serial/FirstStart (100.43s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-337994 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0415 12:28:09.266525  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-337994 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m40.429016161s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (100.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-674160 -n default-k8s-diff-port-674160
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-674160 -n default-k8s-diff-port-674160: exit status 7 (86.737976ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-674160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (322.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-674160 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0415 12:28:25.131413  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:28:25.136709  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:28:25.147018  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:28:25.167390  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:28:25.208572  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:28:25.288946  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:28:25.449496  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:28:25.770545  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:28:26.411143  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:28:27.692042  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:28:30.252347  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:28:33.602056  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 12:28:35.372834  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:28:44.427807  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
E0415 12:28:44.433094  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
E0415 12:28:44.443339  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
E0415 12:28:44.464133  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
E0415 12:28:44.504428  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
E0415 12:28:44.584860  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
E0415 12:28:44.745533  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
E0415 12:28:45.066743  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
E0415 12:28:45.613326  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:28:45.707844  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-674160 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (5m22.563579561s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-674160 -n default-k8s-diff-port-674160
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (322.93s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-570125 -n no-preload-570125
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-570125 -n no-preload-570125: exit status 7 (89.720161ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-570125 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (322.77s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-570125 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
E0415 12:28:46.988270  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
E0415 12:28:49.548875  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
E0415 12:28:50.226966  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
E0415 12:28:54.669851  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
E0415 12:29:04.911032  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
E0415 12:29:06.094260  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:29:12.717492  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-570125 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (5m22.502682513s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-570125 -n no-preload-570125
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (322.77s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-855937 -n old-k8s-version-855937
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-855937 -n old-k8s-version-855937: exit status 7 (102.373652ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-855937 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/old-k8s-version/serial/SecondStart (199.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-855937 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0415 12:29:24.617897  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
E0415 12:29:24.623217  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
E0415 12:29:24.633337  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
E0415 12:29:24.653894  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
E0415 12:29:24.694312  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
E0415 12:29:24.774695  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
E0415 12:29:24.935200  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
E0415 12:29:25.256134  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
E0415 12:29:25.391479  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
E0415 12:29:25.896937  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
E0415 12:29:27.177140  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
E0415 12:29:29.737722  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
E0415 12:29:34.858194  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-855937 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m19.008269377s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-855937 -n old-k8s-version-855937
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (199.31s)

TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-337994 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e6b63d65-d265-4c57-9b5e-166afa9fa8b4] Pending
helpers_test.go:344: "busybox" [e6b63d65-d265-4c57-9b5e-166afa9fa8b4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0415 12:29:41.964567  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
E0415 12:29:41.969856  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
E0415 12:29:41.980168  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
E0415 12:29:42.000507  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
E0415 12:29:42.041082  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
E0415 12:29:42.124804  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
E0415 12:29:42.285279  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
E0415 12:29:42.605897  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
E0415 12:29:43.246953  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
E0415 12:29:44.527482  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e6b63d65-d265-4c57-9b5e-166afa9fa8b4] Running
E0415 12:29:45.098673  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
E0415 12:29:47.055080  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
E0415 12:29:47.088355  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005648498s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-337994 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-337994 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-337994 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.200442814s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-337994 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/embed-certs/serial/Stop (92.52s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-337994 --alsologtostderr -v=3
E0415 12:29:52.847617  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
E0415 12:29:56.651062  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/addons-316289/client.crt: no such file or directory
E0415 12:29:59.234450  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/functional-042762/client.crt: no such file or directory
E0415 12:30:03.088811  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
E0415 12:30:05.578896  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
E0415 12:30:06.352632  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
E0415 12:30:12.148278  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
E0415 12:30:23.569609  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
E0415 12:30:43.606093  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:30:43.611462  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:30:43.621749  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:30:43.642066  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:30:43.682444  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:30:43.762989  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:30:43.923405  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:30:44.244110  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:30:44.884570  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:30:46.165265  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:30:46.539763  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
E0415 12:30:48.726291  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:30:53.846600  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:31:04.087722  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:31:04.530548  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
E0415 12:31:09.646473  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-337994 --alsologtostderr -v=3: (1m32.523235021s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.52s)
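The repeated `cert_rotation.go:168` errors above differ only in which profile's `client.crt` is missing. A minimal sketch for tallying them by profile (the sample lines are copied verbatim from this log; the profile name is the next-to-last path segment of the missing file, so `awk -F/` can pull it out directly):

```shell
# Tally cert_rotation "no such file or directory" errors by minikube profile.
printf '%s\n' \
  'E0415 12:30:43.606093  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory' \
  'E0415 12:30:43.611462  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory' \
  'E0415 12:31:04.530548  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory' |
awk -F/ '/cert_rotation.go/ { print $(NF-1) }' | sort | uniq -c
```

On a saved copy of the full report, replacing the `printf` with `grep 'cert_rotation.go' report.txt` (a hypothetical file name) would give the per-profile counts for the whole run.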

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-337994 -n embed-certs-337994
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-337994 -n embed-certs-337994: exit status 7 (88.523345ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-337994 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (316.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-337994 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0415 12:31:24.568696  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:31:28.273390  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
E0415 12:31:28.872945  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
E0415 12:31:56.558206  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/auto-547820/client.crt: no such file or directory
E0415 12:32:05.529310  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/bridge-547820/client.crt: no such file or directory
E0415 12:32:08.460449  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/enable-default-cni-547820/client.crt: no such file or directory
E0415 12:32:26.451513  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/flannel-547820/client.crt: no such file or directory
E0415 12:32:28.304853  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/kindnet-547820/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-337994 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (5m16.22852434s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-337994 -n embed-certs-337994
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (316.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9jk46" [9530eff1-63ce-4075-a2dc-45fc7b5900de] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005124507s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9jk46" [9530eff1-63ce-4075-a2dc-45fc7b5900de] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00583724s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-855937 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-855937 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-855937 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-855937 -n old-k8s-version-855937
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-855937 -n old-k8s-version-855937: exit status 2 (276.792697ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-855937 -n old-k8s-version-855937
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-855937 -n old-k8s-version-855937: exit status 2 (277.729871ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-855937 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-855937 -n old-k8s-version-855937
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-855937 -n old-k8s-version-855937
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.87s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x5sdq" [85164bc9-47b6-4554-b216-2f92f63fc799] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0415 12:33:44.427565  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x5sdq" [85164bc9-47b6-4554-b216-2f92f63fc799] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.005763975s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x5sdq" [85164bc9-47b6-4554-b216-2f92f63fc799] Running
E0415 12:33:53.487713  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/calico-547820/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005078197s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-674160 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-674160 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-674160 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-674160 -n default-k8s-diff-port-674160
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-674160 -n default-k8s-diff-port-674160: exit status 2 (254.183309ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-674160 -n default-k8s-diff-port-674160
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-674160 -n default-k8s-diff-port-674160: exit status 2 (261.491693ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-674160 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-674160 -n default-k8s-diff-port-674160
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-674160 -n default-k8s-diff-port-674160
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-6tq6m" [1e9a0736-01e5-4e3a-91e8-989f88453101] Running
E0415 12:34:12.114434  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/custom-flannel-547820/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007534579s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-6tq6m" [1e9a0736-01e5-4e3a-91e8-989f88453101] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004359442s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-570125 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-570125 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-570125 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-570125 -n no-preload-570125
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-570125 -n no-preload-570125: exit status 2 (255.105336ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-570125 -n no-preload-570125
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-570125 -n no-preload-570125: exit status 2 (262.693599ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-570125 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-570125 -n no-preload-570125
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-570125 -n no-preload-570125
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.72s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zzj5p" [3add1010-0634-40a8-bb8d-00c26e81c142] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zzj5p" [3add1010-0634-40a8-bb8d-00c26e81c142] Running
E0415 12:36:50.254848  361829 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/default-k8s-diff-port-674160/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004707122s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zzj5p" [3add1010-0634-40a8-bb8d-00c26e81c142] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004592699s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-337994 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-337994 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-337994 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-337994 -n embed-certs-337994
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-337994 -n embed-certs-337994: exit status 2 (249.445285ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-337994 -n embed-certs-337994
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-337994 -n embed-certs-337994: exit status 2 (249.260503ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-337994 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-337994 -n embed-certs-337994
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-337994 -n embed-certs-337994
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.61s)
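Every test above reports its wall-clock duration in its `--- PASS` line. A minimal sketch for summing those durations from a saved copy of this report (the two sample lines are taken from above; only awk's default field splitting on `--- PASS: <name> (<seconds>s)` is relied on):

```shell
# Sum per-test durations from "--- PASS: <name> (<seconds>s)" lines.
printf '%s\n' \
  '--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.52s)' \
  '--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.61s)' |
awk '/^--- PASS:/ { d = $4; gsub(/[()s]/, "", d); total += d; print $3, d }
     END { printf "total %.2fs\n", total }'
```

Pointing the same awk program at the full report (e.g. `awk -f sum.awk report.txt`, with hypothetical file names) yields the aggregate runtime of all passing tests.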

                                                
                                    

Test skip (39/333)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.29.3/cached-images 0
15 TestDownloadOnly/v1.29.3/binaries 0
16 TestDownloadOnly/v1.29.3/kubectl 0
23 TestDownloadOnly/v1.30.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.30.0-rc.2/binaries 0
25 TestDownloadOnly/v1.30.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
263 TestNetworkPlugins/group/kubenet 3.49
271 TestNetworkPlugins/group/cilium 6.3
280 TestStartStop/group/disable-driver-mounts 0.19
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.49s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-547820 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-547820

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-547820

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-547820

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-547820

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-547820

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-547820

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-547820

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-547820

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-547820

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-547820

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

>>> host: /etc/hosts:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

>>> host: /etc/resolv.conf:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-547820

>>> host: crictl pods:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

>>> host: crictl containers:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

>>> k8s: describe netcat deployment:
error: context "kubenet-547820" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-547820" does not exist

>>> k8s: netcat logs:
error: context "kubenet-547820" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-547820" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-547820" does not exist

>>> k8s: coredns logs:
error: context "kubenet-547820" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-547820" does not exist

>>> k8s: api server logs:
error: context "kubenet-547820" does not exist

>>> host: /etc/cni:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

>>> host: ip a s:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-547820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-547820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-547820" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/18644-354432/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 15 Apr 2024 12:15:33 UTC
provider: minikube.sigs.k8s.io
version: v1.33.0-beta.0
name: cluster_info
server: https://192.168.39.39:8443
name: offline-containerd-248594
contexts:
- context:
cluster: offline-containerd-248594
extensions:
- extension:
last-update: Mon, 15 Apr 2024 12:15:33 UTC
provider: minikube.sigs.k8s.io
version: v1.33.0-beta.0
name: context_info
namespace: default
user: offline-containerd-248594
name: offline-containerd-248594
current-context: ""
kind: Config
preferences: {}
users:
- name: offline-containerd-248594
user:
client-certificate: /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/offline-containerd-248594/client.crt
client-key: /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/offline-containerd-248594/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-547820

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

>>> host: containerd daemon status:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

>>> host: containerd daemon config:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

>>> host: containerd config dump:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

>>> host: crio daemon status:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

>>> host: crio daemon config:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

>>> host: /etc/crio:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

>>> host: crio config:
* Profile "kubenet-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-547820"

----------------------- debugLogs end: kubenet-547820 [took: 3.334967442s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-547820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-547820
--- SKIP: TestNetworkPlugins/group/kubenet (3.49s)

TestNetworkPlugins/group/cilium (6.3s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-547820 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-547820

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-547820

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-547820

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-547820

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-547820

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-547820

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-547820

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-547820

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-547820

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-547820

>>> host: /etc/nsswitch.conf:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: /etc/hosts:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: /etc/resolv.conf:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-547820

>>> host: crictl pods:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: crictl containers:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> k8s: describe netcat deployment:
error: context "cilium-547820" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-547820" does not exist

>>> k8s: netcat logs:
error: context "cilium-547820" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-547820" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-547820" does not exist

>>> k8s: coredns logs:
error: context "cilium-547820" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-547820" does not exist

>>> k8s: api server logs:
error: context "cilium-547820" does not exist

>>> host: /etc/cni:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: ip a s:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: ip r s:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: iptables-save:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: iptables table nat:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-547820

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-547820

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-547820" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-547820" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-547820

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-547820

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-547820" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-547820" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-547820" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-547820" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-547820" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: kubelet daemon config:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> k8s: kubelet logs:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18644-354432/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Apr 2024 12:15:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.39.39:8443
  name: offline-containerd-248594
contexts:
- context:
    cluster: offline-containerd-248594
    extensions:
    - extension:
        last-update: Mon, 15 Apr 2024 12:15:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: offline-containerd-248594
  name: offline-containerd-248594
current-context: ""
kind: Config
preferences: {}
users:
- name: offline-containerd-248594
  user:
    client-certificate: /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/offline-containerd-248594/client.crt
    client-key: /home/jenkins/minikube-integration/18644-354432/.minikube/profiles/offline-containerd-248594/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-547820

>>> host: docker daemon status:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: docker daemon config:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: docker system info:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: cri-docker daemon status:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: cri-docker daemon config:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: cri-dockerd version:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: containerd daemon status:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: containerd daemon config:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: containerd config dump:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: crio daemon status:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: crio daemon config:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: /etc/crio:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

>>> host: crio config:
* Profile "cilium-547820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-547820"

----------------------- debugLogs end: cilium-547820 [took: 6.126194227s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-547820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-547820
--- SKIP: TestNetworkPlugins/group/cilium (6.30s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-170186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-170186
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
