Test Report: KVM_Linux_containerd 18585

649852bcd007960ac9edddddae8235c4914b1566:2024-04-08:33941

Failed tests (1/333)

| Order | Failed test             | Duration (s) |
|-------|-------------------------|--------------|
| 44    | TestAddons/parallel/CSI | 60.34        |
TestAddons/parallel/CSI (60.34s)
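The failure is confined to the test's final step, where disabling the csi-hostpath-driver addon exits with status 11; the earlier steps (PVC provisioning, pod scheduling, snapshot, restore) all pass, as the log below shows. A minimal sketch for re-running just this test from a minikube source checkout, assuming the suite's usual "integration" build tag (this CI job also passes driver/runtime flags not reproduced here):

	# run only TestAddons/parallel/CSI from the integration suite
	go test -tags integration ./test/integration -run "TestAddons/parallel/CSI" -timeout 30m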

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 34.752454ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-647801 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-647801 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a1be5862-dabb-44e4-b506-96026aded608] Pending
helpers_test.go:344: "task-pv-pod" [a1be5862-dabb-44e4-b506-96026aded608] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a1be5862-dabb-44e4-b506-96026aded608] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.007201948s
addons_test.go:584: (dbg) Run:  kubectl --context addons-647801 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-647801 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-647801 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-647801 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-647801 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-647801 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-647801 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [04d03b2c-7e75-4fea-9eda-90b73f4dd813] Pending
helpers_test.go:344: "task-pv-pod-restore" [04d03b2c-7e75-4fea-9eda-90b73f4dd813] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [04d03b2c-7e75-4fea-9eda-90b73f4dd813] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005079731s
addons_test.go:626: (dbg) Run:  kubectl --context addons-647801 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-647801 delete pod task-pv-pod-restore: (1.646286686s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-647801 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-647801 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-647801 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-647801 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (284.388409ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0408 18:21:15.377322  621294 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:21:15.377458  621294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:21:15.377468  621294 out.go:304] Setting ErrFile to fd 2...
	I0408 18:21:15.377473  621294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:21:15.377681  621294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
	I0408 18:21:15.377966  621294 mustload.go:65] Loading cluster: addons-647801
	I0408 18:21:15.378337  621294 config.go:182] Loaded profile config "addons-647801": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:21:15.378360  621294 addons.go:597] checking whether the cluster is paused
	I0408 18:21:15.378468  621294 config.go:182] Loaded profile config "addons-647801": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:21:15.378483  621294 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:21:15.378855  621294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:21:15.378905  621294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:21:15.393865  621294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0408 18:21:15.394383  621294 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:21:15.395022  621294 main.go:141] libmachine: Using API Version  1
	I0408 18:21:15.395053  621294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:21:15.395426  621294 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:21:15.395649  621294 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:21:15.397268  621294 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:21:15.397496  621294 ssh_runner.go:195] Run: systemctl --version
	I0408 18:21:15.397523  621294 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:21:15.399786  621294 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:21:15.400149  621294 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:21:15.400240  621294 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:21:15.400364  621294 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:21:15.400554  621294 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:21:15.400720  621294 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:21:15.400902  621294 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:21:15.479464  621294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0408 18:21:15.479569  621294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 18:21:15.530466  621294 cri.go:89] found id: "5be475697aa38e5558ff444148504cda89dbcc83c133c16af227d12c5a2f95c7"
	I0408 18:21:15.530493  621294 cri.go:89] found id: "bb0620fb426f9152bc05689bb49c97e91db177342efb72b7001cf442a3ae56b3"
	I0408 18:21:15.530498  621294 cri.go:89] found id: "46b166f0b98ca1a736522b8c40efbdc91547e8ff139763d10abd53b58147ede9"
	I0408 18:21:15.530501  621294 cri.go:89] found id: "097f217422be22fb9656c89db9296093ed5c7398bec65e293dabd08d417ce245"
	I0408 18:21:15.530504  621294 cri.go:89] found id: "fd509cf43718f482c32bd0ed2161d4d57112f70b34015402b20ee315b567a4a8"
	I0408 18:21:15.530509  621294 cri.go:89] found id: "aa2730d8afe995b575592476add9ef6376599219a6c4c8383db4bfe6d69b23c5"
	I0408 18:21:15.530511  621294 cri.go:89] found id: "7978baf17c4f5dffc10565cc5c098609997b5d2506cfba9e3f0381e588face1b"
	I0408 18:21:15.530514  621294 cri.go:89] found id: "a77f970717cc99b441dfb0f9e35dafc8e82397ba5957dc17c93f9e36a7764a99"
	I0408 18:21:15.530516  621294 cri.go:89] found id: "65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269"
	I0408 18:21:15.530523  621294 cri.go:89] found id: "aac47f897e228db0da34b80f0c53a626665f15859ef90b8e6e31c17544dd97a8"
	I0408 18:21:15.530527  621294 cri.go:89] found id: "5a9344b76c925054a5240512a4b175974bcc65f09a2f7be080b62dd9e8fb9add"
	I0408 18:21:15.530530  621294 cri.go:89] found id: "ca72a2f3bf56e38d8fe36b365bdd6363f8e10c0d7bccbcddaa3585343e588d52"
	I0408 18:21:15.530534  621294 cri.go:89] found id: "c08238c3eed00412eff1d4b053a70ec85947597e1b2d10546f99da7c84f96ec1"
	I0408 18:21:15.530539  621294 cri.go:89] found id: "854157b3fcf90489a3f132ca17bf56449cfc83c0415c4463bc239bfb814ff6d0"
	I0408 18:21:15.530544  621294 cri.go:89] found id: "ea9620ff06d1d05c48aae97cd2060563062d38a48fc5303805da97b280102963"
	I0408 18:21:15.530547  621294 cri.go:89] found id: "affe99c11d9b3ad743bd6071f7fa32625961284e681e5157a2c17556d87bc0d9"
	I0408 18:21:15.530552  621294 cri.go:89] found id: "e0890d0ccaa9f2587853edc7f1efc2c3ef53f278595fa66adcae354a446c0e6f"
	I0408 18:21:15.530557  621294 cri.go:89] found id: ""
	I0408 18:21:15.530604  621294 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0408 18:21:15.586057  621294 main.go:141] libmachine: Making call to close driver server
	I0408 18:21:15.586083  621294 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:21:15.586425  621294 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:21:15.586427  621294 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:21:15.586459  621294 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:21:15.589348  621294 out.go:177] 
	W0408 18:21:15.591026  621294 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-08T18:21:15Z" level=error msg="stat /run/containerd/runc/k8s.io/b6e9a9a2a743dec61a1d98849f18c4f5a0e7ff82be76ba7a4c27134f53377ea2: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-08T18:21:15Z" level=error msg="stat /run/containerd/runc/k8s.io/b6e9a9a2a743dec61a1d98849f18c4f5a0e7ff82be76ba7a4c27134f53377ea2: no such file or directory"
	
	W0408 18:21:15.591048  621294 out.go:239] * 
	* 
	W0408 18:21:15.594578  621294 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 18:21:15.596023  621294 out.go:177] 

** /stderr **
addons_test.go:640: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-647801 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
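For context on the failure: the disable path first enumerates kube-system containers with crictl, then runs "sudo runc --root /run/containerd/runc/k8s.io list -f json" to check whether the cluster is paused; that runc call failed because one container's state directory had just disappeared (the "stat ... no such file or directory" above), which minikube surfaces as MK_ADDON_DISABLE_PAUSED. A sketch of re-running the same check by hand (profile name taken from this run; assumes the cluster is still up and a minikube binary on PATH):

	# list container states the same way minikube's paused-check does
	minikube -p addons-647801 ssh -- sudo runc --root /run/containerd/runc/k8s.io list -f json

On a healthy cluster this prints a JSON array of container states; a non-zero exit reproduces the race this run hit.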
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-647801 addons disable volumesnapshots --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-647801 -n addons-647801
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-647801 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-647801 logs -n 25: (1.476621164s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-749213                                                                     | download-only-749213 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC | 08 Apr 24 18:17 UTC |
	| delete  | -p download-only-801401                                                                     | download-only-801401 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC | 08 Apr 24 18:17 UTC |
	| delete  | -p download-only-114584                                                                     | download-only-114584 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC | 08 Apr 24 18:17 UTC |
	| delete  | -p download-only-749213                                                                     | download-only-749213 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC | 08 Apr 24 18:17 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-805915 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC |                     |
	|         | binary-mirror-805915                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:45805                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-805915                                                                     | binary-mirror-805915 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC | 08 Apr 24 18:17 UTC |
	| addons  | disable dashboard -p                                                                        | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC |                     |
	|         | addons-647801                                                                               |                      |         |                |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC |                     |
	|         | addons-647801                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-647801 --wait=true                                                                | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC | 08 Apr 24 18:20 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| addons  | addons-647801 addons                                                                        | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:20 UTC | 08 Apr 24 18:20 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-647801 ip                                                                            | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:20 UTC | 08 Apr 24 18:20 UTC |
	| addons  | addons-647801 addons disable                                                                | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:20 UTC | 08 Apr 24 18:20 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-647801 addons disable                                                                | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:20 UTC | 08 Apr 24 18:20 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:20 UTC | 08 Apr 24 18:20 UTC |
	|         | addons-647801                                                                               |                      |         |                |                     |                     |
	| ssh     | addons-647801 ssh curl -s                                                                   | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:20 UTC | 08 Apr 24 18:20 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |                |                     |                     |
	| ip      | addons-647801 ip                                                                            | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:20 UTC | 08 Apr 24 18:20 UTC |
	| addons  | addons-647801 addons disable                                                                | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:20 UTC | 08 Apr 24 18:20 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| ssh     | addons-647801 ssh cat                                                                       | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:20 UTC | 08 Apr 24 18:20 UTC |
	|         | /opt/local-path-provisioner/pvc-98f67a28-2944-4b09-a5b8-08ff2d55447a_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-647801 addons disable                                                                | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:20 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-647801 addons disable                                                                | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:20 UTC | 08 Apr 24 18:20 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:20 UTC | 08 Apr 24 18:20 UTC |
	|         | -p addons-647801                                                                            |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:20 UTC | 08 Apr 24 18:20 UTC |
	|         | addons-647801                                                                               |                      |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:20 UTC | 08 Apr 24 18:20 UTC |
	|         | -p addons-647801                                                                            |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-647801 addons                                                                        | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:21 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-647801 addons                                                                        | addons-647801        | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:21 UTC | 08 Apr 24 18:21 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 18:17:56
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 18:17:56.444917  619015 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:17:56.445043  619015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:17:56.445053  619015 out.go:304] Setting ErrFile to fd 2...
	I0408 18:17:56.445057  619015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:17:56.445265  619015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
	I0408 18:17:56.445937  619015 out.go:298] Setting JSON to false
	I0408 18:17:56.446857  619015 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7227,"bootTime":1712593049,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 18:17:56.446927  619015 start.go:139] virtualization: kvm guest
	I0408 18:17:56.449481  619015 out.go:177] * [addons-647801] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 18:17:56.451392  619015 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 18:17:56.451352  619015 notify.go:220] Checking for updates...
	I0408 18:17:56.452844  619015 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:17:56.454213  619015 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18585-610499/kubeconfig
	I0408 18:17:56.455791  619015 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-610499/.minikube
	I0408 18:17:56.457431  619015 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 18:17:56.459039  619015 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 18:17:56.460865  619015 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 18:17:56.493630  619015 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 18:17:56.495084  619015 start.go:297] selected driver: kvm2
	I0408 18:17:56.495103  619015 start.go:901] validating driver "kvm2" against <nil>
	I0408 18:17:56.495116  619015 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 18:17:56.495847  619015 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 18:17:56.495930  619015 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18585-610499/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 18:17:56.511684  619015 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 18:17:56.511746  619015 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 18:17:56.511977  619015 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 18:17:56.512039  619015 cni.go:84] Creating CNI manager for ""
	I0408 18:17:56.512052  619015 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0408 18:17:56.512059  619015 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 18:17:56.512111  619015 start.go:340] cluster config:
	{Name:addons-647801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-647801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:17:56.512219  619015 iso.go:125] acquiring lock: {Name:mk6be88515b11e528d76386559642c5a6b85b7f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 18:17:56.514139  619015 out.go:177] * Starting "addons-647801" primary control-plane node in "addons-647801" cluster
	I0408 18:17:56.515495  619015 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 18:17:56.515574  619015 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18585-610499/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	I0408 18:17:56.515591  619015 cache.go:56] Caching tarball of preloaded images
	I0408 18:17:56.515668  619015 preload.go:173] Found /home/jenkins/minikube-integration/18585-610499/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 18:17:56.515679  619015 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0408 18:17:56.515974  619015 profile.go:143] Saving config to /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/config.json ...
	I0408 18:17:56.515995  619015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/config.json: {Name:mk6aa751fc73a82596041d88aca2b764fba20d5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:17:56.516139  619015 start.go:360] acquireMachinesLock for addons-647801: {Name:mkf11ac381de099daefdb1db1a82d1812a2f5a10 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 18:17:56.516184  619015 start.go:364] duration metric: took 31.283µs to acquireMachinesLock for "addons-647801"
	I0408 18:17:56.516206  619015 start.go:93] Provisioning new machine with config: &{Name:addons-647801 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-647801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0408 18:17:56.516268  619015 start.go:125] createHost starting for "" (driver="kvm2")
	I0408 18:17:56.518210  619015 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0408 18:17:56.518369  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:17:56.518408  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:17:56.533125  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34165
	I0408 18:17:56.533637  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:17:56.534288  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:17:56.534312  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:17:56.534701  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:17:56.534913  619015 main.go:141] libmachine: (addons-647801) Calling .GetMachineName
	I0408 18:17:56.535080  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:17:56.535265  619015 start.go:159] libmachine.API.Create for "addons-647801" (driver="kvm2")
	I0408 18:17:56.535295  619015 client.go:168] LocalClient.Create starting
	I0408 18:17:56.535332  619015 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18585-610499/.minikube/certs/ca.pem
	I0408 18:17:56.714640  619015 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18585-610499/.minikube/certs/cert.pem
	I0408 18:17:56.777889  619015 main.go:141] libmachine: Running pre-create checks...
	I0408 18:17:56.777921  619015 main.go:141] libmachine: (addons-647801) Calling .PreCreateCheck
	I0408 18:17:56.778563  619015 main.go:141] libmachine: (addons-647801) Calling .GetConfigRaw
	I0408 18:17:56.779074  619015 main.go:141] libmachine: Creating machine...
	I0408 18:17:56.779092  619015 main.go:141] libmachine: (addons-647801) Calling .Create
	I0408 18:17:56.779310  619015 main.go:141] libmachine: (addons-647801) Creating KVM machine...
	I0408 18:17:56.780925  619015 main.go:141] libmachine: (addons-647801) DBG | found existing default KVM network
	I0408 18:17:56.781863  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:17:56.781710  619037 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c30}
	I0408 18:17:56.781884  619015 main.go:141] libmachine: (addons-647801) DBG | created network xml: 
	I0408 18:17:56.781902  619015 main.go:141] libmachine: (addons-647801) DBG | <network>
	I0408 18:17:56.781917  619015 main.go:141] libmachine: (addons-647801) DBG |   <name>mk-addons-647801</name>
	I0408 18:17:56.781928  619015 main.go:141] libmachine: (addons-647801) DBG |   <dns enable='no'/>
	I0408 18:17:56.781945  619015 main.go:141] libmachine: (addons-647801) DBG |   
	I0408 18:17:56.781955  619015 main.go:141] libmachine: (addons-647801) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0408 18:17:56.781973  619015 main.go:141] libmachine: (addons-647801) DBG |     <dhcp>
	I0408 18:17:56.781989  619015 main.go:141] libmachine: (addons-647801) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0408 18:17:56.782001  619015 main.go:141] libmachine: (addons-647801) DBG |     </dhcp>
	I0408 18:17:56.782010  619015 main.go:141] libmachine: (addons-647801) DBG |   </ip>
	I0408 18:17:56.782018  619015 main.go:141] libmachine: (addons-647801) DBG |   
	I0408 18:17:56.782024  619015 main.go:141] libmachine: (addons-647801) DBG | </network>
	I0408 18:17:56.782032  619015 main.go:141] libmachine: (addons-647801) DBG | 
	I0408 18:17:56.787402  619015 main.go:141] libmachine: (addons-647801) DBG | trying to create private KVM network mk-addons-647801 192.168.39.0/24...
	I0408 18:17:56.856680  619015 main.go:141] libmachine: (addons-647801) DBG | private KVM network mk-addons-647801 192.168.39.0/24 created
	I0408 18:17:56.856711  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:17:56.856636  619037 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18585-610499/.minikube
	I0408 18:17:56.856752  619015 main.go:141] libmachine: (addons-647801) Setting up store path in /home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801 ...
	I0408 18:17:56.856779  619015 main.go:141] libmachine: (addons-647801) Building disk image from file:///home/jenkins/minikube-integration/18585-610499/.minikube/cache/iso/amd64/minikube-v1.33.0-1712570768-18585-amd64.iso
	I0408 18:17:56.856986  619015 main.go:141] libmachine: (addons-647801) Downloading /home/jenkins/minikube-integration/18585-610499/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18585-610499/.minikube/cache/iso/amd64/minikube-v1.33.0-1712570768-18585-amd64.iso...
	I0408 18:17:57.095188  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:17:57.094996  619037 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa...
	I0408 18:17:57.175484  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:17:57.175310  619037 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/addons-647801.rawdisk...
	I0408 18:17:57.175562  619015 main.go:141] libmachine: (addons-647801) DBG | Writing magic tar header
	I0408 18:17:57.175580  619015 main.go:141] libmachine: (addons-647801) DBG | Writing SSH key tar header
	I0408 18:17:57.175594  619015 main.go:141] libmachine: (addons-647801) Setting executable bit set on /home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801 (perms=drwx------)
	I0408 18:17:57.175609  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:17:57.175436  619037 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801 ...
	I0408 18:17:57.175628  619015 main.go:141] libmachine: (addons-647801) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801
	I0408 18:17:57.175639  619015 main.go:141] libmachine: (addons-647801) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18585-610499/.minikube/machines
	I0408 18:17:57.175652  619015 main.go:141] libmachine: (addons-647801) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18585-610499/.minikube
	I0408 18:17:57.175682  619015 main.go:141] libmachine: (addons-647801) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18585-610499
	I0408 18:17:57.175693  619015 main.go:141] libmachine: (addons-647801) Setting executable bit set on /home/jenkins/minikube-integration/18585-610499/.minikube/machines (perms=drwxr-xr-x)
	I0408 18:17:57.175707  619015 main.go:141] libmachine: (addons-647801) Setting executable bit set on /home/jenkins/minikube-integration/18585-610499/.minikube (perms=drwxr-xr-x)
	I0408 18:17:57.175717  619015 main.go:141] libmachine: (addons-647801) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0408 18:17:57.175732  619015 main.go:141] libmachine: (addons-647801) Setting executable bit set on /home/jenkins/minikube-integration/18585-610499 (perms=drwxrwxr-x)
	I0408 18:17:57.175746  619015 main.go:141] libmachine: (addons-647801) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 18:17:57.175761  619015 main.go:141] libmachine: (addons-647801) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 18:17:57.175794  619015 main.go:141] libmachine: (addons-647801) Creating domain...
	I0408 18:17:57.175812  619015 main.go:141] libmachine: (addons-647801) DBG | Checking permissions on dir: /home/jenkins
	I0408 18:17:57.175822  619015 main.go:141] libmachine: (addons-647801) DBG | Checking permissions on dir: /home
	I0408 18:17:57.175837  619015 main.go:141] libmachine: (addons-647801) DBG | Skipping /home - not owner
	I0408 18:17:57.176917  619015 main.go:141] libmachine: (addons-647801) define libvirt domain using xml: 
	I0408 18:17:57.176959  619015 main.go:141] libmachine: (addons-647801) <domain type='kvm'>
	I0408 18:17:57.176983  619015 main.go:141] libmachine: (addons-647801)   <name>addons-647801</name>
	I0408 18:17:57.177001  619015 main.go:141] libmachine: (addons-647801)   <memory unit='MiB'>4000</memory>
	I0408 18:17:57.177027  619015 main.go:141] libmachine: (addons-647801)   <vcpu>2</vcpu>
	I0408 18:17:57.177041  619015 main.go:141] libmachine: (addons-647801)   <features>
	I0408 18:17:57.177047  619015 main.go:141] libmachine: (addons-647801)     <acpi/>
	I0408 18:17:57.177052  619015 main.go:141] libmachine: (addons-647801)     <apic/>
	I0408 18:17:57.177061  619015 main.go:141] libmachine: (addons-647801)     <pae/>
	I0408 18:17:57.177067  619015 main.go:141] libmachine: (addons-647801)     
	I0408 18:17:57.177075  619015 main.go:141] libmachine: (addons-647801)   </features>
	I0408 18:17:57.177084  619015 main.go:141] libmachine: (addons-647801)   <cpu mode='host-passthrough'>
	I0408 18:17:57.177092  619015 main.go:141] libmachine: (addons-647801)   
	I0408 18:17:57.177098  619015 main.go:141] libmachine: (addons-647801)   </cpu>
	I0408 18:17:57.177107  619015 main.go:141] libmachine: (addons-647801)   <os>
	I0408 18:17:57.177115  619015 main.go:141] libmachine: (addons-647801)     <type>hvm</type>
	I0408 18:17:57.177154  619015 main.go:141] libmachine: (addons-647801)     <boot dev='cdrom'/>
	I0408 18:17:57.177183  619015 main.go:141] libmachine: (addons-647801)     <boot dev='hd'/>
	I0408 18:17:57.177202  619015 main.go:141] libmachine: (addons-647801)     <bootmenu enable='no'/>
	I0408 18:17:57.177220  619015 main.go:141] libmachine: (addons-647801)   </os>
	I0408 18:17:57.177235  619015 main.go:141] libmachine: (addons-647801)   <devices>
	I0408 18:17:57.177248  619015 main.go:141] libmachine: (addons-647801)     <disk type='file' device='cdrom'>
	I0408 18:17:57.177268  619015 main.go:141] libmachine: (addons-647801)       <source file='/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/boot2docker.iso'/>
	I0408 18:17:57.177280  619015 main.go:141] libmachine: (addons-647801)       <target dev='hdc' bus='scsi'/>
	I0408 18:17:57.177290  619015 main.go:141] libmachine: (addons-647801)       <readonly/>
	I0408 18:17:57.177329  619015 main.go:141] libmachine: (addons-647801)     </disk>
	I0408 18:17:57.177345  619015 main.go:141] libmachine: (addons-647801)     <disk type='file' device='disk'>
	I0408 18:17:57.177360  619015 main.go:141] libmachine: (addons-647801)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 18:17:57.177385  619015 main.go:141] libmachine: (addons-647801)       <source file='/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/addons-647801.rawdisk'/>
	I0408 18:17:57.177403  619015 main.go:141] libmachine: (addons-647801)       <target dev='hda' bus='virtio'/>
	I0408 18:17:57.177443  619015 main.go:141] libmachine: (addons-647801)     </disk>
	I0408 18:17:57.177456  619015 main.go:141] libmachine: (addons-647801)     <interface type='network'>
	I0408 18:17:57.177479  619015 main.go:141] libmachine: (addons-647801)       <source network='mk-addons-647801'/>
	I0408 18:17:57.177498  619015 main.go:141] libmachine: (addons-647801)       <model type='virtio'/>
	I0408 18:17:57.177513  619015 main.go:141] libmachine: (addons-647801)     </interface>
	I0408 18:17:57.177525  619015 main.go:141] libmachine: (addons-647801)     <interface type='network'>
	I0408 18:17:57.177537  619015 main.go:141] libmachine: (addons-647801)       <source network='default'/>
	I0408 18:17:57.177547  619015 main.go:141] libmachine: (addons-647801)       <model type='virtio'/>
	I0408 18:17:57.177558  619015 main.go:141] libmachine: (addons-647801)     </interface>
	I0408 18:17:57.177570  619015 main.go:141] libmachine: (addons-647801)     <serial type='pty'>
	I0408 18:17:57.177586  619015 main.go:141] libmachine: (addons-647801)       <target port='0'/>
	I0408 18:17:57.177596  619015 main.go:141] libmachine: (addons-647801)     </serial>
	I0408 18:17:57.177615  619015 main.go:141] libmachine: (addons-647801)     <console type='pty'>
	I0408 18:17:57.177637  619015 main.go:141] libmachine: (addons-647801)       <target type='serial' port='0'/>
	I0408 18:17:57.177650  619015 main.go:141] libmachine: (addons-647801)     </console>
	I0408 18:17:57.177665  619015 main.go:141] libmachine: (addons-647801)     <rng model='virtio'>
	I0408 18:17:57.177678  619015 main.go:141] libmachine: (addons-647801)       <backend model='random'>/dev/random</backend>
	I0408 18:17:57.177689  619015 main.go:141] libmachine: (addons-647801)     </rng>
	I0408 18:17:57.177700  619015 main.go:141] libmachine: (addons-647801)     
	I0408 18:17:57.177713  619015 main.go:141] libmachine: (addons-647801)     
	I0408 18:17:57.177729  619015 main.go:141] libmachine: (addons-647801)   </devices>
	I0408 18:17:57.177744  619015 main.go:141] libmachine: (addons-647801) </domain>
	I0408 18:17:57.177755  619015 main.go:141] libmachine: (addons-647801) 
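
The block above is the full libvirt domain XML that libmachine generates before defining and booting the VM. For orientation only, here is a minimal sketch of defining and starting a domain with the libvirt Go bindings; the import path, connection URI, and the truncated XML string are assumptions, and this is not minikube's actual kvm2 driver code:

	package main

	import (
		"fmt"
		"log"

		libvirt "libvirt.org/go/libvirt" // assumed import path for the official Go bindings
	)

	func main() {
		// Connect to the system libvirt daemon (the URI matches the KVMQemuURI
		// value printed later in this log).
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		domainXML := "<domain type='kvm'>...</domain>" // placeholder for the XML dumped above

		// Define the persistent domain from XML, then boot it.
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			log.Fatalf("define: %v", err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			log.Fatalf("start: %v", err)
		}
		fmt.Println("domain started")
	}
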
	I0408 18:17:57.181988  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:db:42:ff in network default
	I0408 18:17:57.182585  619015 main.go:141] libmachine: (addons-647801) Ensuring networks are active...
	I0408 18:17:57.182605  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:17:57.183276  619015 main.go:141] libmachine: (addons-647801) Ensuring network default is active
	I0408 18:17:57.183559  619015 main.go:141] libmachine: (addons-647801) Ensuring network mk-addons-647801 is active
	I0408 18:17:57.184019  619015 main.go:141] libmachine: (addons-647801) Getting domain xml...
	I0408 18:17:57.184638  619015 main.go:141] libmachine: (addons-647801) Creating domain...
	I0408 18:17:58.383741  619015 main.go:141] libmachine: (addons-647801) Waiting to get IP...
	I0408 18:17:58.384555  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:17:58.384930  619015 main.go:141] libmachine: (addons-647801) DBG | unable to find current IP address of domain addons-647801 in network mk-addons-647801
	I0408 18:17:58.384948  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:17:58.384924  619037 retry.go:31] will retry after 285.455337ms: waiting for machine to come up
	I0408 18:17:58.672528  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:17:58.672985  619015 main.go:141] libmachine: (addons-647801) DBG | unable to find current IP address of domain addons-647801 in network mk-addons-647801
	I0408 18:17:58.673016  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:17:58.672931  619037 retry.go:31] will retry after 328.141177ms: waiting for machine to come up
	I0408 18:17:59.002358  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:17:59.002825  619015 main.go:141] libmachine: (addons-647801) DBG | unable to find current IP address of domain addons-647801 in network mk-addons-647801
	I0408 18:17:59.002863  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:17:59.002783  619037 retry.go:31] will retry after 326.241937ms: waiting for machine to come up
	I0408 18:17:59.330359  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:17:59.330867  619015 main.go:141] libmachine: (addons-647801) DBG | unable to find current IP address of domain addons-647801 in network mk-addons-647801
	I0408 18:17:59.330898  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:17:59.330815  619037 retry.go:31] will retry after 368.846932ms: waiting for machine to come up
	I0408 18:17:59.701539  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:17:59.701960  619015 main.go:141] libmachine: (addons-647801) DBG | unable to find current IP address of domain addons-647801 in network mk-addons-647801
	I0408 18:17:59.701993  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:17:59.701906  619037 retry.go:31] will retry after 684.75793ms: waiting for machine to come up
	I0408 18:18:00.387951  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:00.388336  619015 main.go:141] libmachine: (addons-647801) DBG | unable to find current IP address of domain addons-647801 in network mk-addons-647801
	I0408 18:18:00.388359  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:18:00.388293  619037 retry.go:31] will retry after 684.895609ms: waiting for machine to come up
	I0408 18:18:01.075164  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:01.075590  619015 main.go:141] libmachine: (addons-647801) DBG | unable to find current IP address of domain addons-647801 in network mk-addons-647801
	I0408 18:18:01.075612  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:18:01.075555  619037 retry.go:31] will retry after 816.954444ms: waiting for machine to come up
	I0408 18:18:01.894391  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:01.894976  619015 main.go:141] libmachine: (addons-647801) DBG | unable to find current IP address of domain addons-647801 in network mk-addons-647801
	I0408 18:18:01.895025  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:18:01.894926  619037 retry.go:31] will retry after 960.50853ms: waiting for machine to come up
	I0408 18:18:02.857149  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:02.857531  619015 main.go:141] libmachine: (addons-647801) DBG | unable to find current IP address of domain addons-647801 in network mk-addons-647801
	I0408 18:18:02.857556  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:18:02.857490  619037 retry.go:31] will retry after 1.619982815s: waiting for machine to come up
	I0408 18:18:04.479503  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:04.479952  619015 main.go:141] libmachine: (addons-647801) DBG | unable to find current IP address of domain addons-647801 in network mk-addons-647801
	I0408 18:18:04.480020  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:18:04.479919  619037 retry.go:31] will retry after 2.183973005s: waiting for machine to come up
	I0408 18:18:06.665501  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:06.666045  619015 main.go:141] libmachine: (addons-647801) DBG | unable to find current IP address of domain addons-647801 in network mk-addons-647801
	I0408 18:18:06.666074  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:18:06.666007  619037 retry.go:31] will retry after 2.48932162s: waiting for machine to come up
	I0408 18:18:09.158819  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:09.159260  619015 main.go:141] libmachine: (addons-647801) DBG | unable to find current IP address of domain addons-647801 in network mk-addons-647801
	I0408 18:18:09.159292  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:18:09.159234  619037 retry.go:31] will retry after 2.935515866s: waiting for machine to come up
	I0408 18:18:12.096113  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:12.096703  619015 main.go:141] libmachine: (addons-647801) DBG | unable to find current IP address of domain addons-647801 in network mk-addons-647801
	I0408 18:18:12.096739  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:18:12.096652  619037 retry.go:31] will retry after 3.417365906s: waiting for machine to come up
	I0408 18:18:15.518492  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:15.519130  619015 main.go:141] libmachine: (addons-647801) DBG | unable to find current IP address of domain addons-647801 in network mk-addons-647801
	I0408 18:18:15.519161  619015 main.go:141] libmachine: (addons-647801) DBG | I0408 18:18:15.519077  619037 retry.go:31] will retry after 4.798993858s: waiting for machine to come up
	I0408 18:18:20.322279  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.322801  619015 main.go:141] libmachine: (addons-647801) Found IP for machine: 192.168.39.113
	I0408 18:18:20.322831  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has current primary IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.322840  619015 main.go:141] libmachine: (addons-647801) Reserving static IP address...
	I0408 18:18:20.323364  619015 main.go:141] libmachine: (addons-647801) DBG | unable to find host DHCP lease matching {name: "addons-647801", mac: "52:54:00:33:16:c0", ip: "192.168.39.113"} in network mk-addons-647801
	I0408 18:18:20.402306  619015 main.go:141] libmachine: (addons-647801) Reserved static IP address: 192.168.39.113
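
The "will retry after ..." lines above come from a randomized, growing backoff while polling the DHCP leases for the new MAC address. A minimal sketch of that polling pattern follows; the growth factor, cap, and jitter are assumptions rather than a copy of retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check until it succeeds or the deadline passes,
	// growing the delay and adding jitter, as in the retry lines above.
	func waitFor(check func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if err := check(); err == nil {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v\n", jittered)
			time.Sleep(jittered)
			if delay < 5*time.Second {
				delay += delay / 2 // grow roughly geometrically, capped
			}
		}
		return errors.New("timed out waiting for machine IP")
	}

	func main() {
		tries := 0
		_ = waitFor(func() error {
			tries++
			if tries < 4 {
				return errors.New("no lease yet")
			}
			return nil
		}, time.Minute)
	}
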
	I0408 18:18:20.402341  619015 main.go:141] libmachine: (addons-647801) Waiting for SSH to be available...
	I0408 18:18:20.402350  619015 main.go:141] libmachine: (addons-647801) DBG | Getting to WaitForSSH function...
	I0408 18:18:20.405135  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.405469  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:20.405497  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.405675  619015 main.go:141] libmachine: (addons-647801) DBG | Using SSH client type: external
	I0408 18:18:20.405718  619015 main.go:141] libmachine: (addons-647801) DBG | Using SSH private key: /home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa (-rw-------)
	I0408 18:18:20.405750  619015 main.go:141] libmachine: (addons-647801) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 18:18:20.405776  619015 main.go:141] libmachine: (addons-647801) DBG | About to run SSH command:
	I0408 18:18:20.405792  619015 main.go:141] libmachine: (addons-647801) DBG | exit 0
	I0408 18:18:20.528347  619015 main.go:141] libmachine: (addons-647801) DBG | SSH cmd err, output: <nil>: 
	I0408 18:18:20.528631  619015 main.go:141] libmachine: (addons-647801) KVM machine creation complete!
	I0408 18:18:20.528915  619015 main.go:141] libmachine: (addons-647801) Calling .GetConfigRaw
	I0408 18:18:20.529491  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:20.529704  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:20.529966  619015 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 18:18:20.529984  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:20.531344  619015 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 18:18:20.531363  619015 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 18:18:20.531402  619015 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 18:18:20.531412  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:20.534005  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.534485  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:20.534528  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.534667  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:20.534836  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:20.535057  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:20.535253  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:20.535432  619015 main.go:141] libmachine: Using SSH client type: native
	I0408 18:18:20.535691  619015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0408 18:18:20.535704  619015 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 18:18:20.635676  619015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
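
Both WaitForSSH passes (the external /usr/bin/ssh probe earlier, then the native client here) boil down to running `exit 0` over SSH until it succeeds. A self-contained sketch of the native-style probe using golang.org/x/crypto/ssh; the address, user, and key path are placeholders:

	package main

	import (
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// probeSSH returns nil once "exit 0" runs successfully on the host.
	func probeSSH(addr, user, keyPath string) error {
		pem, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(pem)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		return sess.Run("exit 0")
	}

	func main() {
		if err := probeSSH("192.168.39.113:22", "docker", "/path/to/id_rsa"); err != nil {
			log.Fatal(err)
		}
	}
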
	I0408 18:18:20.635706  619015 main.go:141] libmachine: Detecting the provisioner...
	I0408 18:18:20.635718  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:20.639965  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.640322  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:20.640352  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.640568  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:20.640838  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:20.641050  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:20.641268  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:20.641430  619015 main.go:141] libmachine: Using SSH client type: native
	I0408 18:18:20.641622  619015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0408 18:18:20.641634  619015 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 18:18:20.745364  619015 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 18:18:20.745464  619015 main.go:141] libmachine: found compatible host: buildroot
	I0408 18:18:20.745480  619015 main.go:141] libmachine: Provisioning with buildroot...
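
Provisioner detection is essentially parsing /etc/os-release and matching the ID/NAME fields shown above. A small sketch of that parsing:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns os-release KEY=VALUE lines into a map,
	// trimming the optional quotes around values.
	func parseOSRelease(contents string) map[string]string {
		out := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(contents))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			k, v, ok := strings.Cut(line, "=")
			if !ok {
				continue
			}
			out[k] = strings.Trim(v, `"`)
		}
		return out
	}

	func main() {
		sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		info := parseOSRelease(sample)
		fmt.Println(info["ID"], info["VERSION_ID"]) // buildroot 2023.02.9
	}
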
	I0408 18:18:20.745496  619015 main.go:141] libmachine: (addons-647801) Calling .GetMachineName
	I0408 18:18:20.745863  619015 buildroot.go:166] provisioning hostname "addons-647801"
	I0408 18:18:20.745890  619015 main.go:141] libmachine: (addons-647801) Calling .GetMachineName
	I0408 18:18:20.746129  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:20.748971  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.749325  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:20.749350  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.749574  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:20.749767  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:20.750004  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:20.750190  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:20.750354  619015 main.go:141] libmachine: Using SSH client type: native
	I0408 18:18:20.750586  619015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0408 18:18:20.750602  619015 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-647801 && echo "addons-647801" | sudo tee /etc/hostname
	I0408 18:18:20.867744  619015 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-647801
	
	I0408 18:18:20.867777  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:20.870542  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.870872  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:20.870905  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.871048  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:20.871251  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:20.871461  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:20.871671  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:20.871919  619015 main.go:141] libmachine: Using SSH client type: native
	I0408 18:18:20.872107  619015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0408 18:18:20.872125  619015 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-647801' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-647801/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-647801' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 18:18:20.983764  619015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 18:18:20.983850  619015 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18585-610499/.minikube CaCertPath:/home/jenkins/minikube-integration/18585-610499/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18585-610499/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18585-610499/.minikube}
	I0408 18:18:20.983909  619015 buildroot.go:174] setting up certificates
	I0408 18:18:20.983930  619015 provision.go:84] configureAuth start
	I0408 18:18:20.983947  619015 main.go:141] libmachine: (addons-647801) Calling .GetMachineName
	I0408 18:18:20.984362  619015 main.go:141] libmachine: (addons-647801) Calling .GetIP
	I0408 18:18:20.987307  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.987796  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:20.987831  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.987961  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:20.990607  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.990970  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:20.990998  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:20.991173  619015 provision.go:143] copyHostCerts
	I0408 18:18:20.991255  619015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18585-610499/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18585-610499/.minikube/ca.pem (1082 bytes)
	I0408 18:18:20.991365  619015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18585-610499/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18585-610499/.minikube/cert.pem (1123 bytes)
	I0408 18:18:20.991423  619015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18585-610499/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18585-610499/.minikube/key.pem (1679 bytes)
	I0408 18:18:20.991463  619015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18585-610499/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18585-610499/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18585-610499/.minikube/certs/ca-key.pem org=jenkins.addons-647801 san=[127.0.0.1 192.168.39.113 addons-647801 localhost minikube]
	I0408 18:18:21.157374  619015 provision.go:177] copyRemoteCerts
	I0408 18:18:21.157461  619015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 18:18:21.157494  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:21.160375  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.160784  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:21.160820  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.161025  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:21.161248  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:21.161401  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:21.161568  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:21.243298  619015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-610499/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 18:18:21.274702  619015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-610499/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0408 18:18:21.301993  619015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-610499/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 18:18:21.331108  619015 provision.go:87] duration metric: took 347.159939ms to configureAuth
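
The configureAuth step above generates a CA-signed server certificate whose SANs cover the machine's IPs and hostnames (the san=[...] list logged earlier). A compact, self-contained sketch with crypto/x509; the throwaway CA, key sizes, and lifetimes are illustrative, not minikube's exact parameters:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Self-contained demo: make a throwaway CA, then sign a server cert with SANs.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "addons-647801"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirroring the log line: [127.0.0.1 192.168.39.113 addons-647801 localhost minikube]
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.113")},
			DNSNames:    []string{"addons-647801", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
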
	I0408 18:18:21.331140  619015 buildroot.go:189] setting minikube options for container-runtime
	I0408 18:18:21.331350  619015 config.go:182] Loaded profile config "addons-647801": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:18:21.331393  619015 main.go:141] libmachine: Checking connection to Docker...
	I0408 18:18:21.331415  619015 main.go:141] libmachine: (addons-647801) Calling .GetURL
	I0408 18:18:21.332740  619015 main.go:141] libmachine: (addons-647801) DBG | Using libvirt version 6000000
	I0408 18:18:21.334914  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.335226  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:21.335255  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.335432  619015 main.go:141] libmachine: Docker is up and running!
	I0408 18:18:21.335478  619015 main.go:141] libmachine: Reticulating splines...
	I0408 18:18:21.335498  619015 client.go:171] duration metric: took 24.800190867s to LocalClient.Create
	I0408 18:18:21.335561  619015 start.go:167] duration metric: took 24.800296753s to libmachine.API.Create "addons-647801"
	I0408 18:18:21.335575  619015 start.go:293] postStartSetup for "addons-647801" (driver="kvm2")
	I0408 18:18:21.335592  619015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 18:18:21.335619  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:21.335912  619015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 18:18:21.335944  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:21.338120  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.338500  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:21.338530  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.338707  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:21.338912  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:21.339074  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:21.339203  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:21.423428  619015 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 18:18:21.428737  619015 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 18:18:21.428778  619015 filesync.go:126] Scanning /home/jenkins/minikube-integration/18585-610499/.minikube/addons for local assets ...
	I0408 18:18:21.428870  619015 filesync.go:126] Scanning /home/jenkins/minikube-integration/18585-610499/.minikube/files for local assets ...
	I0408 18:18:21.428895  619015 start.go:296] duration metric: took 93.313775ms for postStartSetup
	I0408 18:18:21.428945  619015 main.go:141] libmachine: (addons-647801) Calling .GetConfigRaw
	I0408 18:18:21.440099  619015 main.go:141] libmachine: (addons-647801) Calling .GetIP
	I0408 18:18:21.442773  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.443306  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:21.443345  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.443600  619015 profile.go:143] Saving config to /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/config.json ...
	I0408 18:18:21.443783  619015 start.go:128] duration metric: took 24.927503299s to createHost
	I0408 18:18:21.443811  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:21.446001  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.446435  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:21.566259  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:21.566297  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.566654  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:21.566881  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:21.567147  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:21.567387  619015 main.go:141] libmachine: Using SSH client type: native
	I0408 18:18:21.567636  619015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I0408 18:18:21.567651  619015 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 18:18:21.669229  619015 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712600301.651719370
	
	I0408 18:18:21.669265  619015 fix.go:216] guest clock: 1712600301.651719370
	I0408 18:18:21.669275  619015 fix.go:229] Guest: 2024-04-08 18:18:21.65171937 +0000 UTC Remote: 2024-04-08 18:18:21.443797775 +0000 UTC m=+25.048036620 (delta=207.921595ms)
	I0408 18:18:21.669344  619015 fix.go:200] guest clock delta is within tolerance: 207.921595ms
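
The guest-clock check parses the `date +%s.%N` output and compares it with the host clock; here the delta of about 208ms is accepted. A sketch of that comparison; the 2s tolerance is an assumed value, not minikube's exact threshold:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts "sec.nsec" output from `date +%s.%N` to a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		sec, nsec, _ := strings.Cut(strings.TrimSpace(out), ".")
		s, err := strconv.ParseInt(sec, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		n, err := strconv.ParseInt(nsec, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		return time.Unix(s, n), nil
	}

	func main() {
		guest, err := parseGuestClock("1712600301.651719370")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed tolerance
		fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta < tolerance)
	}
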
	I0408 18:18:21.669357  619015 start.go:83] releasing machines lock for "addons-647801", held for 25.153158001s
	I0408 18:18:21.669396  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:21.669722  619015 main.go:141] libmachine: (addons-647801) Calling .GetIP
	I0408 18:18:21.672442  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.672746  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:21.672772  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.672963  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:21.673477  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:21.673648  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:21.673743  619015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 18:18:21.673784  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:21.673857  619015 ssh_runner.go:195] Run: cat /version.json
	I0408 18:18:21.673874  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:21.676725  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.676762  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.677118  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:21.677146  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.677178  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:21.677214  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:21.677342  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:21.677469  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:21.677594  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:21.677727  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:21.677800  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:21.677866  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:21.677920  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:21.677992  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:21.758783  619015 ssh_runner.go:195] Run: systemctl --version
	I0408 18:18:21.790919  619015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 18:18:21.798082  619015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 18:18:21.798183  619015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 18:18:21.817263  619015 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
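
The find/mv pipeline above sidelines bridge and podman CNI configs by renaming them with a .mk_disabled suffix so only the intended CNI remains. The same idea expressed in Go (must run as root; the path is taken from the log):

	package main

	import (
		"fmt"
		"log"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		entries, err := os.ReadDir("/etc/cni/net.d")
		if err != nil {
			log.Fatal(err)
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			// Match the same name patterns the find invocation uses.
			if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
				continue
			}
			src := filepath.Join("/etc/cni/net.d", name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			fmt.Println("disabled", src)
		}
	}
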
	I0408 18:18:21.817306  619015 start.go:494] detecting cgroup driver to use...
	I0408 18:18:21.817407  619015 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 18:18:22.128060  619015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 18:18:22.144290  619015 docker.go:217] disabling cri-docker service (if available) ...
	I0408 18:18:22.144364  619015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 18:18:22.161195  619015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 18:18:22.178173  619015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 18:18:22.316034  619015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 18:18:22.501186  619015 docker.go:233] disabling docker service ...
	I0408 18:18:22.501274  619015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 18:18:22.519638  619015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 18:18:22.534672  619015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 18:18:22.678570  619015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 18:18:22.824064  619015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 18:18:22.840256  619015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 18:18:22.862146  619015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0408 18:18:22.874876  619015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 18:18:22.887834  619015 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 18:18:22.887909  619015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 18:18:22.900898  619015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 18:18:22.914058  619015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 18:18:22.927482  619015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 18:18:22.940143  619015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 18:18:22.953877  619015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 18:18:22.967077  619015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 18:18:22.980085  619015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
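
Each of the sed invocations above rewrites one knob in /etc/containerd/config.toml in place. The equivalent anchored-regex replace in Go, shown for the SystemdCgroup knob only:

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte(`${1}SystemdCgroup = false`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			log.Fatal(err)
		}
	}
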
	I0408 18:18:22.993149  619015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 18:18:23.004691  619015 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 18:18:23.004756  619015 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 18:18:23.021677  619015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 18:18:23.033736  619015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 18:18:23.162189  619015 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 18:18:23.195991  619015 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0408 18:18:23.196142  619015 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0408 18:18:23.202106  619015 retry.go:31] will retry after 768.954ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0408 18:18:23.972197  619015 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0408 18:18:23.978111  619015 start.go:562] Will wait 60s for crictl version
	I0408 18:18:23.978193  619015 ssh_runner.go:195] Run: which crictl
	I0408 18:18:23.982600  619015 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 18:18:24.018931  619015 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.14
	RuntimeApiVersion:  v1
	I0408 18:18:24.019036  619015 ssh_runner.go:195] Run: containerd --version
	I0408 18:18:24.053843  619015 ssh_runner.go:195] Run: containerd --version
	I0408 18:18:24.089920  619015 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.7.14 ...
	I0408 18:18:24.091764  619015 main.go:141] libmachine: (addons-647801) Calling .GetIP
	I0408 18:18:24.094776  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:24.095112  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:24.095139  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:24.095436  619015 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 18:18:24.100322  619015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 18:18:24.115458  619015 kubeadm.go:877] updating cluster {Name:addons-647801 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-647801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 18:18:24.115610  619015 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0408 18:18:24.115661  619015 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 18:18:24.151054  619015 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 18:18:24.151134  619015 ssh_runner.go:195] Run: which lz4
	I0408 18:18:24.155918  619015 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 18:18:24.160837  619015 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 18:18:24.160882  619015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-610499/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (402346652 bytes)
	I0408 18:18:25.839118  619015 containerd.go:563] duration metric: took 1.683237421s to copy over tarball
	I0408 18:18:25.839236  619015 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 18:18:28.473778  619015 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634504873s)
	I0408 18:18:28.473809  619015 containerd.go:570] duration metric: took 2.634634505s to extract the tarball
	I0408 18:18:28.473816  619015 ssh_runner.go:146] rm: /preloaded.tar.lz4
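
The preload step above copies a ~400 MB .tar.lz4 to the guest and unpacks it with `tar -I lz4 -C /var`. A library-style sketch of streaming the archive straight into a remote tar over an already-connected x/crypto/ssh client (client setup as in the SSH probe sketch earlier); using `-xf -` to read the archive from stdin is a variation on what the log shows, which stages the file first:

	package provision

	import (
		"fmt"
		"io"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// extractPreload streams a local .tar.lz4 preload into the guest and
	// unpacks it under /var, mirroring the tar invocation in the log.
	// The caller supplies an already-connected SSH client.
	func extractPreload(client *ssh.Client, localPath string) error {
		f, err := os.Open(localPath)
		if err != nil {
			return err
		}
		defer f.Close()

		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()

		stdin, err := sess.StdinPipe()
		if err != nil {
			return err
		}

		// Read the archive from stdin instead of a staged remote path.
		if err := sess.Start("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf -"); err != nil {
			return err
		}
		if _, err := io.Copy(stdin, f); err != nil {
			return fmt.Errorf("stream tarball: %w", err)
		}
		stdin.Close() // signal EOF so tar can finish
		return sess.Wait()
	}
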
	I0408 18:18:28.516878  619015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 18:18:28.635601  619015 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 18:18:28.670631  619015 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 18:18:28.721575  619015 retry.go:31] will retry after 259.238371ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-08T18:18:28Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0408 18:18:28.981087  619015 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 18:18:29.028744  619015 containerd.go:627] all images are preloaded for containerd runtime.
	I0408 18:18:29.028780  619015 cache_images.go:84] Images are preloaded, skipping loading
	I0408 18:18:29.028792  619015 kubeadm.go:928] updating node { 192.168.39.113 8443 v1.29.3 containerd true true} ...
	I0408 18:18:29.028957  619015 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-647801 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-647801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
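
One detail worth noting in the generated drop-in above: the bare `ExecStart=` line is deliberate. The base kubelet.service already sets ExecStart, and a systemd drop-in must first clear the existing command with an empty assignment before it can set a replacement; without it, systemd would reject the second ExecStart for a non-oneshot service.
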
	I0408 18:18:29.029035  619015 ssh_runner.go:195] Run: sudo crictl info
	I0408 18:18:29.067090  619015 cni.go:84] Creating CNI manager for ""
	I0408 18:18:29.067120  619015 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0408 18:18:29.067130  619015 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 18:18:29.067153  619015 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.113 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-647801 NodeName:addons-647801 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 18:18:29.067301  619015 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.113
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-647801"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.113
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.113"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
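
The kubeadm manifest above is rendered from the options struct logged at kubeadm.go:181. A toy rendering of a few of those fields with text/template; the struct and template here are illustrative, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// A few of the knobs visible in the generated manifest above.
	type kubeadmParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		ServiceSubnet    string
		K8sVersion       string
	}

	const manifestTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(manifestTmpl))
		_ = t.Execute(os.Stdout, kubeadmParams{
			AdvertiseAddress: "192.168.39.113",
			BindPort:         8443,
			NodeName:         "addons-647801",
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
			K8sVersion:       "v1.29.3",
		})
	}
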
	
	I0408 18:18:29.067368  619015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 18:18:29.078971  619015 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 18:18:29.079063  619015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 18:18:29.089972  619015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0408 18:18:29.109116  619015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 18:18:29.131660  619015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
	I0408 18:18:29.153062  619015 ssh_runner.go:195] Run: grep 192.168.39.113	control-plane.minikube.internal$ /etc/hosts
	I0408 18:18:29.157913  619015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.113	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 18:18:29.173783  619015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 18:18:29.310105  619015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 18:18:29.340740  619015 certs.go:68] Setting up /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801 for IP: 192.168.39.113
	I0408 18:18:29.340770  619015 certs.go:194] generating shared ca certs ...
	I0408 18:18:29.340790  619015 certs.go:226] acquiring lock for ca certs: {Name:mk12ba796c58019cc65f7e4b3cead2742d729fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:18:29.340957  619015 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18585-610499/.minikube/ca.key
	I0408 18:18:29.453450  619015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-610499/.minikube/ca.crt ...
	I0408 18:18:29.453488  619015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-610499/.minikube/ca.crt: {Name:mk929688d914e13e39ba2de341b51604990b0b73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:18:29.453683  619015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-610499/.minikube/ca.key ...
	I0408 18:18:29.453698  619015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-610499/.minikube/ca.key: {Name:mk4d14285054a31287d2f85a348c20fea0d12a9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:18:29.453792  619015 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18585-610499/.minikube/proxy-client-ca.key
	I0408 18:18:29.713632  619015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-610499/.minikube/proxy-client-ca.crt ...
	I0408 18:18:29.713666  619015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-610499/.minikube/proxy-client-ca.crt: {Name:mk976849602666534252ba97c625de112fa047e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:18:29.713871  619015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-610499/.minikube/proxy-client-ca.key ...
	I0408 18:18:29.713887  619015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-610499/.minikube/proxy-client-ca.key: {Name:mk2bc8ca751b7b4af9171f8ba201af6a8cd1a72a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:18:29.713991  619015 certs.go:256] generating profile certs ...
	I0408 18:18:29.714058  619015 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.key
	I0408 18:18:29.714074  619015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt with IP's: []
	I0408 18:18:29.777424  619015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt ...
	I0408 18:18:29.777461  619015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: {Name:mk9ab716d17d94e6d78b63fc747687f9c82a87cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:18:29.777670  619015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.key ...
	I0408 18:18:29.777686  619015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.key: {Name:mkf1e14ca7f758a1cd97a5e93ecafbb851dbceee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:18:29.777799  619015 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/apiserver.key.0bc55464
	I0408 18:18:29.777822  619015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/apiserver.crt.0bc55464 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.113]
	I0408 18:18:29.886591  619015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/apiserver.crt.0bc55464 ...
	I0408 18:18:29.886630  619015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/apiserver.crt.0bc55464: {Name:mk2b1859897645699b6a377186a68822899e44a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:18:29.886824  619015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/apiserver.key.0bc55464 ...
	I0408 18:18:29.886843  619015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/apiserver.key.0bc55464: {Name:mk6f14fb998b31a1b88d6061bd05676ef4a54152 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:18:29.886940  619015 certs.go:381] copying /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/apiserver.crt.0bc55464 -> /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/apiserver.crt
	I0408 18:18:29.887038  619015 certs.go:385] copying /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/apiserver.key.0bc55464 -> /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/apiserver.key
	I0408 18:18:29.887090  619015 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/proxy-client.key
	I0408 18:18:29.887109  619015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/proxy-client.crt with IP's: []
	I0408 18:18:30.026764  619015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/proxy-client.crt ...
	I0408 18:18:30.026804  619015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/proxy-client.crt: {Name:mkeed88aa9d789bc16aa96a06d653ef638614ca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:18:30.026992  619015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/proxy-client.key ...
	I0408 18:18:30.027009  619015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/proxy-client.key: {Name:mk057e22a671ea30435f05828b798b8133459061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:18:30.027226  619015 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-610499/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 18:18:30.027265  619015 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-610499/.minikube/certs/ca.pem (1082 bytes)
	I0408 18:18:30.027289  619015 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-610499/.minikube/certs/cert.pem (1123 bytes)
	I0408 18:18:30.027318  619015 certs.go:484] found cert: /home/jenkins/minikube-integration/18585-610499/.minikube/certs/key.pem (1679 bytes)
	I0408 18:18:30.028005  619015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-610499/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 18:18:30.070404  619015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-610499/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 18:18:30.104125  619015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-610499/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 18:18:30.140155  619015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-610499/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 18:18:30.169633  619015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0408 18:18:30.197620  619015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 18:18:30.225079  619015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 18:18:30.252565  619015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 18:18:30.281503  619015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18585-610499/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 18:18:30.310126  619015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
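	At this point every certificate and key generated on the host has been copied into the VM at the certificatesDir the kubeadm config declares (/var/lib/minikube/certs), which is why kubeadm later logs "Using existing ca certificate authority" instead of minting a new one. A quick way to verify the transfer (illustrative; assumes the profile from this run):

		minikube ssh -p addons-647801 -- sudo ls -l /var/lib/minikube/certs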
	I0408 18:18:30.330935  619015 ssh_runner.go:195] Run: openssl version
	I0408 18:18:30.337813  619015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 18:18:30.350229  619015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 18:18:30.355705  619015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 18:18 /usr/share/ca-certificates/minikubeCA.pem
	I0408 18:18:30.355772  619015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 18:18:30.362156  619015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
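	The two openssl steps above implement OpenSSL's hashed-directory CA lookup: 'openssl x509 -hash -noout' prints the subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs is how TLS clients locate a CA by that hash (b5213941.0 in this log). Recomputing the hash and checking the link, as a sketch:

		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		ls -l "/etc/ssl/certs/${HASH}.0"   # should resolve to minikubeCA.pem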
	I0408 18:18:30.374364  619015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 18:18:30.379932  619015 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 18:18:30.379995  619015 kubeadm.go:391] StartCluster: {Name:addons-647801 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-647801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
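	The StartCluster dump above is the in-memory profile config; minikube persists the same structure as JSON, which is easier to inspect than the one-line Go struct (path assumed from the MINIKUBE_HOME used in this run):

		cat /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/config.json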
	I0408 18:18:30.380076  619015 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0408 18:18:30.380124  619015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 18:18:30.423122  619015 cri.go:89] found id: ""
	I0408 18:18:30.423204  619015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 18:18:30.434900  619015 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 18:18:30.446119  619015 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 18:18:30.457513  619015 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 18:18:30.457537  619015 kubeadm.go:156] found existing configuration files:
	
	I0408 18:18:30.457596  619015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 18:18:30.468147  619015 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 18:18:30.468211  619015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 18:18:30.479879  619015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 18:18:30.490460  619015 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 18:18:30.490526  619015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 18:18:30.501752  619015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 18:18:30.512761  619015 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 18:18:30.512832  619015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 18:18:30.524281  619015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 18:18:30.535706  619015 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 18:18:30.535771  619015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 18:18:30.547287  619015 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
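	The --ignore-preflight-errors list lets init proceed over directories and manifest files that a previous attempt may have left behind, plus the Port-10250, Swap, NumCPU and Mem checks that small VMs commonly trip. To see which preflight checks would fire without bootstrapping anything, the preflight phase can be run on its own (a sketch using the same config file and binary path):

		sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" \
		  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml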
	I0408 18:18:30.740489  619015 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 18:18:42.349862  619015 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 18:18:42.349933  619015 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 18:18:42.350016  619015 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 18:18:42.350128  619015 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 18:18:42.350272  619015 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 18:18:42.350345  619015 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 18:18:42.352083  619015 out.go:204]   - Generating certificates and keys ...
	I0408 18:18:42.352184  619015 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 18:18:42.352256  619015 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 18:18:42.352323  619015 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 18:18:42.352428  619015 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0408 18:18:42.352529  619015 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0408 18:18:42.352619  619015 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0408 18:18:42.352696  619015 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0408 18:18:42.352868  619015 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-647801 localhost] and IPs [192.168.39.113 127.0.0.1 ::1]
	I0408 18:18:42.352965  619015 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0408 18:18:42.353124  619015 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-647801 localhost] and IPs [192.168.39.113 127.0.0.1 ::1]
	I0408 18:18:42.353219  619015 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 18:18:42.353312  619015 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 18:18:42.353384  619015 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0408 18:18:42.353447  619015 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 18:18:42.353531  619015 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 18:18:42.353600  619015 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 18:18:42.353642  619015 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 18:18:42.353695  619015 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 18:18:42.353761  619015 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 18:18:42.353873  619015 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 18:18:42.353968  619015 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 18:18:42.355354  619015 out.go:204]   - Booting up control plane ...
	I0408 18:18:42.355461  619015 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 18:18:42.355578  619015 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 18:18:42.355663  619015 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 18:18:42.355763  619015 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 18:18:42.355840  619015 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 18:18:42.355874  619015 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 18:18:42.356000  619015 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 18:18:42.356070  619015 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.502989 seconds
	I0408 18:18:42.356165  619015 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 18:18:42.356271  619015 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 18:18:42.356332  619015 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 18:18:42.356506  619015 kubeadm.go:309] [mark-control-plane] Marking the node addons-647801 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 18:18:42.356562  619015 kubeadm.go:309] [bootstrap-token] Using token: fna5ik.wkag52bt9lyssw63
	I0408 18:18:42.358157  619015 out.go:204]   - Configuring RBAC rules ...
	I0408 18:18:42.358248  619015 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 18:18:42.358325  619015 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 18:18:42.358439  619015 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 18:18:42.358550  619015 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 18:18:42.358662  619015 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 18:18:42.358735  619015 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 18:18:42.358832  619015 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 18:18:42.358875  619015 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 18:18:42.358916  619015 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 18:18:42.358923  619015 kubeadm.go:309] 
	I0408 18:18:42.358985  619015 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 18:18:42.359019  619015 kubeadm.go:309] 
	I0408 18:18:42.359116  619015 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 18:18:42.359133  619015 kubeadm.go:309] 
	I0408 18:18:42.359169  619015 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 18:18:42.359258  619015 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 18:18:42.359304  619015 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 18:18:42.359311  619015 kubeadm.go:309] 
	I0408 18:18:42.359390  619015 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 18:18:42.359406  619015 kubeadm.go:309] 
	I0408 18:18:42.359471  619015 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 18:18:42.359484  619015 kubeadm.go:309] 
	I0408 18:18:42.359561  619015 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 18:18:42.359655  619015 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 18:18:42.359754  619015 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 18:18:42.359765  619015 kubeadm.go:309] 
	I0408 18:18:42.359874  619015 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 18:18:42.359976  619015 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 18:18:42.359987  619015 kubeadm.go:309] 
	I0408 18:18:42.360098  619015 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fna5ik.wkag52bt9lyssw63 \
	I0408 18:18:42.360238  619015 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a61b0b8fd1721a2d16156bb778902f7baa4f8ff6bd48b6595518334d2e35842f \
	I0408 18:18:42.360272  619015 kubeadm.go:309] 	--control-plane 
	I0408 18:18:42.360280  619015 kubeadm.go:309] 
	I0408 18:18:42.360351  619015 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 18:18:42.360358  619015 kubeadm.go:309] 
	I0408 18:18:42.360421  619015 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fna5ik.wkag52bt9lyssw63 \
	I0408 18:18:42.360516  619015 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a61b0b8fd1721a2d16156bb778902f7baa4f8ff6bd48b6595518334d2e35842f 
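	The --discovery-token-ca-cert-hash printed in both join commands is the SHA-256 digest of the cluster CA's public key; a joining node uses it to pin the CA it fetches over the unauthenticated bootstrap channel. It can be recomputed on the control plane with the standard recipe from the Kubernetes docs, pointed at the certificatesDir this run uses:

		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt |
		  openssl rsa -pubin -outform der 2>/dev/null |
		  openssl dgst -sha256 -hex | sed 's/^.* //'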
	I0408 18:18:42.360528  619015 cni.go:84] Creating CNI manager for ""
	I0408 18:18:42.360539  619015 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0408 18:18:42.362078  619015 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 18:18:42.363344  619015 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 18:18:42.383697  619015 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
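	The 496 bytes scp'd above are the bridge CNI conflist that the "Configuring bridge CNI" step announced. For orientation, a minimal conflist of that general shape, using the podSubnet from the kubeadm config; the values here are illustrative, not the exact file minikube renders:

		sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
		{
		  "cniVersion": "0.3.1",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isDefaultGateway": true,
		      "ipMasq": true,
		      "hairpinMode": true,
		      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
		    },
		    { "type": "portmap", "capabilities": { "portMappings": true } }
		  ]
		}
		EOF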
	I0408 18:18:42.443461  619015 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 18:18:42.443656  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:42.443661  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-647801 minikube.k8s.io/updated_at=2024_04_08T18_18_42_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f9de8f0b190a4305b11b3a925ec3e499cf3fc021 minikube.k8s.io/name=addons-647801 minikube.k8s.io/primary=true
	I0408 18:18:42.528800  619015 ops.go:34] apiserver oom_adj: -16
	I0408 18:18:42.658955  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:43.159408  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:43.659150  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:44.159657  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:44.659006  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:45.159960  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:45.659955  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:46.159216  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:46.660024  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:47.159144  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:47.659182  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:48.159150  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:48.659260  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:49.159026  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:49.659621  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:50.159844  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:50.659700  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:51.159575  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:51.659890  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:52.159362  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:52.659332  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:53.159475  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:53.659855  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:54.159582  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:54.659165  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:55.159232  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:55.659236  619015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:18:55.816515  619015 kubeadm.go:1107] duration metric: took 13.372953124s to wait for elevateKubeSystemPrivileges
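	The run of 'kubectl get sa default' calls above is a poll loop: the control plane is up, but the cluster only becomes usable for workloads once the controller-manager has created the default ServiceAccount, so minikube retries roughly every 500ms (visible in the timestamps) until the get succeeds. An equivalent wait loop, as a sketch:

		until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
		      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		  sleep 0.5
		done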
	W0408 18:18:55.816584  619015 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 18:18:55.816594  619015 kubeadm.go:393] duration metric: took 25.436614139s to StartCluster
	I0408 18:18:55.816621  619015 settings.go:142] acquiring lock: {Name:mk7328ab96b6bbc341353227736a59c6a6c111ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:18:55.816771  619015 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18585-610499/kubeconfig
	I0408 18:18:55.817253  619015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18585-610499/kubeconfig: {Name:mkf6ab43abc79cd756921b633f01d085c9d5bb68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:18:55.817467  619015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0408 18:18:55.817530  619015 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.113 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0408 18:18:55.819520  619015 out.go:177] * Verifying Kubernetes components...
	I0408 18:18:55.817587  619015 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
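	The toEnable map above drives the per-addon setup goroutines that follow; each true entry produces one of the "Setting addon X=true" lines below. Outside the test harness the same switches are exposed on the CLI (illustrative; assumes the profile from this run):

		minikube addons enable csi-hostpath-driver -p addons-647801
		minikube addons list -p addons-647801   # shows enabled/disabled state per addon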
	I0408 18:18:55.817750  619015 config.go:182] Loaded profile config "addons-647801": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:18:55.819626  619015 addons.go:69] Setting default-storageclass=true in profile "addons-647801"
	I0408 18:18:55.821247  619015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 18:18:55.821273  619015 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-647801"
	I0408 18:18:55.819633  619015 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-647801"
	I0408 18:18:55.819640  619015 addons.go:69] Setting yakd=true in profile "addons-647801"
	I0408 18:18:55.821466  619015 addons.go:234] Setting addon yakd=true in "addons-647801"
	I0408 18:18:55.821489  619015 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-647801"
	I0408 18:18:55.821512  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:18:55.819640  619015 addons.go:69] Setting cloud-spanner=true in profile "addons-647801"
	I0408 18:18:55.819653  619015 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-647801"
	I0408 18:18:55.819656  619015 addons.go:69] Setting storage-provisioner=true in profile "addons-647801"
	I0408 18:18:55.819661  619015 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-647801"
	I0408 18:18:55.819670  619015 addons.go:69] Setting volumesnapshots=true in profile "addons-647801"
	I0408 18:18:55.819667  619015 addons.go:69] Setting ingress=true in profile "addons-647801"
	I0408 18:18:55.819672  619015 addons.go:69] Setting metrics-server=true in profile "addons-647801"
	I0408 18:18:55.819665  619015 addons.go:69] Setting registry=true in profile "addons-647801"
	I0408 18:18:55.819683  619015 addons.go:69] Setting ingress-dns=true in profile "addons-647801"
	I0408 18:18:55.819684  619015 addons.go:69] Setting inspektor-gadget=true in profile "addons-647801"
	I0408 18:18:55.819679  619015 addons.go:69] Setting gcp-auth=true in profile "addons-647801"
	I0408 18:18:55.819718  619015 addons.go:69] Setting helm-tiller=true in profile "addons-647801"
	I0408 18:18:55.821568  619015 addons.go:234] Setting addon cloud-spanner=true in "addons-647801"
	I0408 18:18:55.821584  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:18:55.821592  619015 addons.go:234] Setting addon volumesnapshots=true in "addons-647801"
	I0408 18:18:55.821611  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:18:55.821618  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:18:55.821839  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.821874  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.821886  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.821930  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.822000  619015 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-647801"
	I0408 18:18:55.822013  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.822036  619015 addons.go:234] Setting addon ingress=true in "addons-647801"
	I0408 18:18:55.822067  619015 addons.go:234] Setting addon storage-provisioner=true in "addons-647801"
	I0408 18:18:55.822120  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:18:55.822147  619015 addons.go:234] Setting addon registry=true in "addons-647801"
	I0408 18:18:55.822174  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:18:55.822173  619015 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-647801"
	I0408 18:18:55.822168  619015 addons.go:234] Setting addon helm-tiller=true in "addons-647801"
	I0408 18:18:55.822258  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:18:55.822042  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:18:55.822475  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.822491  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.822531  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.822124  619015 addons.go:234] Setting addon metrics-server=true in "addons-647801"
	I0408 18:18:55.822557  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.822602  619015 addons.go:234] Setting addon inspektor-gadget=true in "addons-647801"
	I0408 18:18:55.822623  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:18:55.822678  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.822721  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.822729  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.822751  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.822075  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.822150  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:18:55.822531  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.822930  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.822933  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.822962  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.823273  619015 mustload.go:65] Loading cluster: addons-647801
	I0408 18:18:55.823502  619015 config.go:182] Loaded profile config "addons-647801": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:18:55.823921  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.823986  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.821999  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.826371  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.821997  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.826556  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.822099  619015 addons.go:234] Setting addon ingress-dns=true in "addons-647801"
	I0408 18:18:55.826707  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:18:55.827079  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.827122  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.822557  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:18:55.832488  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.832522  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.843281  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43477
	I0408 18:18:55.843311  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
	I0408 18:18:55.843490  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38993
	I0408 18:18:55.843967  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.844175  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.844270  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.844538  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.844552  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.844684  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.844703  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.844707  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.844719  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.845008  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.845063  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.845370  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.845380  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.845808  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.845856  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.846164  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40535
	I0408 18:18:55.847427  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.847455  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.849655  619015 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-647801"
	I0408 18:18:55.849706  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:18:55.850109  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.850158  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.852248  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.852285  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.852880  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.853541  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.853588  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.854148  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.854373  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.857358  619015 addons.go:234] Setting addon default-storageclass=true in "addons-647801"
	I0408 18:18:55.857440  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:18:55.857880  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0408 18:18:55.858178  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.858286  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.858779  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.859383  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.859402  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.859807  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.860436  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.860472  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.870810  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I0408 18:18:55.871589  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.872430  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.872466  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.872880  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.873743  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.873785  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.874054  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32849
	I0408 18:18:55.876600  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.877246  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.877290  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.877723  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.878314  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.878348  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.879841  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46793
	I0408 18:18:55.880344  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.880928  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.880944  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.881309  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.881903  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.881940  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.882173  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46339
	I0408 18:18:55.882711  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.883246  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.883270  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.883729  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.884415  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.884465  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.884774  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I0408 18:18:55.885279  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.885911  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.885950  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.886324  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.886873  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.886917  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.897076  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45219
	I0408 18:18:55.897762  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.898347  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.898369  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.898867  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.899619  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.899661  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.901548  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36205
	I0408 18:18:55.902277  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.902349  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34567
	I0408 18:18:55.902944  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.902969  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.903199  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.903464  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44643
	I0408 18:18:55.903466  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.903763  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.903778  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.904126  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.904225  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.904434  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.904499  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37579
	I0408 18:18:55.904696  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.904721  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.905460  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.905485  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.905536  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.905554  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I0408 18:18:55.905779  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.905997  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.907421  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42057
	I0408 18:18:55.907952  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44889
	I0408 18:18:55.908082  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.908322  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.908350  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.908379  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:55.908500  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.908513  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.908565  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.908588  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.910590  619015 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0408 18:18:55.908770  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:18:55.908809  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.909057  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.909139  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.909640  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.910391  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:55.912018  619015 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0408 18:18:55.912031  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0408 18:18:55.912057  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:55.913548  619015 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0408 18:18:55.915224  619015 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0408 18:18:55.915245  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0408 18:18:55.915269  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:55.913829  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.915365  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.913070  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.915480  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.913431  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.916005  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.912848  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.916112  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.916131  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.916140  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:55.916163  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.916468  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.916619  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.916643  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:55.917049  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:55.917872  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.917962  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.918324  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:55.918508  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:55.919034  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:55.920979  619015 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0408 18:18:55.919540  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.920380  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:55.925943  619015 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0408 18:18:55.923277  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:55.923461  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:55.929001  619015 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0408 18:18:55.927919  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.928283  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:55.930520  619015 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0408 18:18:55.930534  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0408 18:18:55.930557  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:55.930641  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
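The "scp memory --> ..." entries stream an addon manifest embedded in the minikube binary straight to a path on the guest; no temporary file is written on the host, which is why each entry reports only a byte count. One way to get a similar effect over an established *ssh.Client, sketched with github.com/pkg/sftp (an assumption for illustration; minikube's ssh_runner has its own copy implementation, and the real runner also handles the elevated permissions that writing under /etc/kubernetes requires):

	package sketch

	import (
		"github.com/pkg/sftp"
		"golang.org/x/crypto/ssh"
	)

	// copyBytes writes an in-memory asset to remotePath over an existing
	// SSH connection. Sketch only; not ssh_runner.go's actual mechanism.
	func copyBytes(conn *ssh.Client, remotePath string, data []byte) error {
		c, err := sftp.NewClient(conn)
		if err != nil {
			return err
		}
		defer c.Close()
		f, err := c.Create(remotePath)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = f.Write(data)
		return err
	}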
	I0408 18:18:55.930956  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39161
	I0408 18:18:55.932199  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.932917  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.932949  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.933404  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.933600  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.935768  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:55.935859  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37673
	I0408 18:18:55.937852  619015 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0408 18:18:55.936258  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.936652  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.937217  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:55.938970  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
	I0408 18:18:55.939311  619015 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0408 18:18:55.939317  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:55.939327  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0408 18:18:55.939345  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.939351  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:55.939737  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.939772  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:55.940266  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.940285  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.940297  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:55.940668  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.940677  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:55.941255  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.941304  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.941792  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.941810  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.941843  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43479
	I0408 18:18:55.942467  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.943106  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:18:55.943159  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:18:55.943749  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.944439  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.944459  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.944550  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42663
	I0408 18:18:55.944740  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.944959  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.945048  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:55.945064  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.945091  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.945297  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:55.945483  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:55.945648  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.945669  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.945791  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38667
	I0408 18:18:55.945907  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.946171  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.946268  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.946317  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:55.946473  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:55.946710  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:55.946901  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.946923  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.947323  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.947519  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.947756  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45445
	I0408 18:18:55.947927  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:55.948304  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40393
	I0408 18:18:55.950161  619015 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0408 18:18:55.948845  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.948965  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.949533  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:55.951618  619015 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 18:18:55.951632  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 18:18:55.951653  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:55.954121  619015 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0408 18:18:55.952736  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40685
	I0408 18:18:55.953446  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.953569  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.955614  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.957260  619015 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0408 18:18:55.956169  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.956339  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.956584  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.957411  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0408 18:18:55.963395  619015 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0408 18:18:55.961715  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.961772  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.962619  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33305
	I0408 18:18:55.962663  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.963338  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.964889  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:55.965166  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.966532  619015 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0408 18:18:55.965380  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:55.965397  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.965544  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0408 18:18:55.965583  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:55.965756  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.965877  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0408 18:18:55.966227  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.966655  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.967629  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.967629  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.969919  619015 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0408 18:18:55.966895  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:55.968893  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.968893  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:55.968943  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.968984  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36327
	I0408 18:18:55.969018  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.969203  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.969211  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.969858  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:55.972332  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.972360  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.972339  619015 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0408 18:18:55.971667  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.971692  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:55.971751  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.971840  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.972140  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:18:55.971261  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.972829  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.974856  619015 out.go:177]   - Using image docker.io/registry:2.8.3
	I0408 18:18:55.975162  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:55.975935  619015 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0408 18:18:55.976011  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.976226  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.977067  619015 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0408 18:18:55.976413  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.976446  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:55.976737  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:18:55.978362  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:18:55.977946  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.978269  619015 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0408 18:18:55.978689  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.978736  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:18:55.978740  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:55.979665  619015 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0408 18:18:55.979782  619015 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0408 18:18:55.979833  619015 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0408 18:18:55.980010  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.981052  619015 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0408 18:18:55.981086  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0408 18:18:55.981254  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:55.981282  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:18:55.981367  619015 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 18:18:55.982250  619015 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0408 18:18:55.982268  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:55.983450  619015 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0408 18:18:55.983623  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0408 18:18:55.983623  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 18:18:55.984013  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:55.984930  619015 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0408 18:18:55.984939  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:18:55.984954  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0408 18:18:55.986214  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:55.986223  619015 out.go:177]   - Using image docker.io/busybox:stable
	I0408 18:18:55.986234  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:55.987797  619015 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 18:18:55.989450  619015 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 18:18:55.989466  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 18:18:55.989512  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:55.987912  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0408 18:18:55.989573  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:55.987964  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:55.991114  619015 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0408 18:18:55.988291  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.991060  619015 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0408 18:18:55.992593  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0408 18:18:55.992598  619015 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0408 18:18:55.992615  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0408 18:18:55.992617  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:55.992635  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:55.992645  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.992658  619015 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0408 18:18:55.992698  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:55.993819  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:55.994638  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.994653  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.994657  619015 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0408 18:18:55.994671  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0408 18:18:55.994690  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:18:55.994780  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.995030  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:55.995930  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.995946  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:55.995962  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:55.995975  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.996001  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:55.996137  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:55.996198  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.996233  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:55.996251  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:55.996639  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:55.996660  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.996689  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:55.996740  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:55.996889  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:55.997201  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:55.997221  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.997407  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:55.997570  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:55.997644  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.997671  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:55.997768  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:55.997935  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:55.998422  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:55.998421  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:55.998461  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:55.998480  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.998591  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:55.998639  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:55.998806  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:55.999030  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:55.999072  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:55.999739  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.999735  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.999760  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.999768  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:55.999792  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:55.999792  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:55.999822  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:55.999973  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:56.000008  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:56.000024  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:56.000039  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:56.000203  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:56.000258  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:56.000472  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:56.000530  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:56.000533  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:18:56.000550  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:18:56.000710  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:18:56.000714  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:56.000766  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:56.000866  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:18:56.000910  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:18:56.001010  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:18:56.001126  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	W0408 18:18:56.007776  619015 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55280->192.168.39.113:22: read: connection reset by peer
	I0408 18:18:56.007811  619015 retry.go:31] will retry after 329.609047ms: ssh: handshake failed: read tcp 192.168.39.1:55280->192.168.39.113:22: read: connection reset by peer
	W0408 18:18:56.011539  619015 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55312->192.168.39.113:22: read: connection reset by peer
	I0408 18:18:56.011569  619015 retry.go:31] will retry after 313.462589ms: ssh: handshake failed: read tcp 192.168.39.1:55312->192.168.39.113:22: read: connection reset by peer
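The two handshake failures above are benign: with many addon goroutines dialing at once, the guest's sshd resets some connections while it is still settling, and retry.go retries each dial after a short randomized delay. A minimal retry loop in that spirit (retryDial and its delay constants are illustrative, not minikube's exact backoff policy):

	package sketch

	import (
		"math/rand"
		"time"
	)

	// retryDial retries op with a small jittered delay, matching the
	// "will retry after 329.609047ms" pattern logged by retry.go:31.
	func retryDial(attempts int, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			time.Sleep(250*time.Millisecond + time.Duration(rand.Intn(150))*time.Millisecond)
		}
		return err
	}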
	I0408 18:18:56.444513  619015 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0408 18:18:56.444538  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0408 18:18:56.614868  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0408 18:18:56.707494  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0408 18:18:56.794871  619015 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0408 18:18:56.794902  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0408 18:18:56.806705  619015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 18:18:56.806753  619015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
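The /bin/bash pipeline above edits the CoreDNS Corefile in place: it fetches the coredns ConfigMap, uses sed to insert a hosts block immediately before the existing "forward . /etc/resolv.conf" directive (and a log line before errors), then feeds the result back through kubectl replace. Reading the inserted text out of the sed expression, the affected part of the Corefile ends up as:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }

This is the host.minikube.internal record that start.go later reports as injected into CoreDNS's ConfigMap, letting pods resolve the KVM host's bridge address.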
	I0408 18:18:56.824516  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 18:18:56.830734  619015 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0408 18:18:56.830759  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0408 18:18:56.851826  619015 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0408 18:18:56.851852  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0408 18:18:56.853415  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0408 18:18:56.907781  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0408 18:18:56.910438  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0408 18:18:57.007172  619015 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0408 18:18:57.007200  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0408 18:18:57.009797  619015 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 18:18:57.009816  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0408 18:18:57.057767  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 18:18:57.159020  619015 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0408 18:18:57.159058  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0408 18:18:57.165362  619015 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0408 18:18:57.165390  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0408 18:18:57.169578  619015 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0408 18:18:57.169601  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0408 18:18:57.210344  619015 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0408 18:18:57.210371  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0408 18:18:57.223496  619015 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0408 18:18:57.223549  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0408 18:18:57.425874  619015 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0408 18:18:57.425921  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0408 18:18:57.450634  619015 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0408 18:18:57.450676  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0408 18:18:57.512753  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0408 18:18:57.517170  619015 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0408 18:18:57.517200  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0408 18:18:57.651386  619015 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 18:18:57.651418  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 18:18:57.653196  619015 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0408 18:18:57.653218  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0408 18:18:57.658519  619015 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0408 18:18:57.658542  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0408 18:18:57.764298  619015 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0408 18:18:57.764333  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0408 18:18:57.876133  619015 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0408 18:18:57.876178  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0408 18:18:57.895543  619015 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0408 18:18:57.895587  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0408 18:18:57.910289  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0408 18:18:57.962223  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0408 18:18:58.033730  619015 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0408 18:18:58.033762  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0408 18:18:58.083049  619015 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 18:18:58.083079  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 18:18:58.175985  619015 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 18:18:58.176013  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0408 18:18:58.209238  619015 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0408 18:18:58.209267  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0408 18:18:58.212259  619015 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0408 18:18:58.212294  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0408 18:18:58.504190  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 18:18:58.540469  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 18:18:58.577038  619015 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0408 18:18:58.577064  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0408 18:18:58.592380  619015 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0408 18:18:58.592414  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0408 18:18:58.933720  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.318806585s)
	I0408 18:18:58.933783  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:18:58.933799  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:18:58.934185  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:18:58.934222  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:18:58.934239  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:18:58.934258  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:18:58.934272  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:18:58.934537  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:18:58.934623  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:18:58.934625  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:18:59.017717  619015 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0408 18:18:59.017751  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0408 18:18:59.019747  619015 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0408 18:18:59.019772  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0408 18:18:59.373489  619015 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0408 18:18:59.373521  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0408 18:18:59.431751  619015 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0408 18:18:59.431783  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0408 18:18:59.545930  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0408 18:18:59.725064  619015 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0408 18:18:59.725097  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0408 18:18:59.845276  619015 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0408 18:18:59.845302  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0408 18:19:00.052812  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0408 18:19:02.533500  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.825959463s)
	I0408 18:19:02.533524  619015 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.726776025s)
	I0408 18:19:02.533568  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:02.533584  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:02.533583  619015 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.726805663s)
	I0408 18:19:02.533602  619015 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0408 18:19:02.533708  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.709148284s)
	I0408 18:19:02.533761  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:02.533788  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:02.534024  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:02.534051  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:02.534066  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:02.534075  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:02.534335  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:02.534356  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:02.534377  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:02.534737  619015 node_ready.go:35] waiting up to 6m0s for node "addons-647801" to be "Ready" ...
	I0408 18:19:02.535061  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:02.535082  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:02.535097  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:02.535106  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:02.535879  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:02.535903  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:02.551982  619015 node_ready.go:49] node "addons-647801" has status "Ready":"True"
	I0408 18:19:02.552020  619015 node_ready.go:38] duration metric: took 17.240498ms for node "addons-647801" to be "Ready" ...
	I0408 18:19:02.552034  619015 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 18:19:02.590081  619015 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-dptwv" in "kube-system" namespace to be "Ready" ...
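node_ready.go and pod_ready.go poll the API server until the Ready condition flips to True, under the 6m0s cap shown above, and record the elapsed time as a duration metric. An analogous wait sketched with client-go (waitPodReady is illustrative, not minikube's actual helper):

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls until the pod's Ready condition is True or the
	// timeout elapses, mirroring the pod_ready.go waits in the log.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}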
	I0408 18:19:02.591419  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:02.591440  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:02.591783  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:02.591839  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:02.591864  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:02.641437  619015 pod_ready.go:92] pod "coredns-76f75df574-dptwv" in "kube-system" namespace has status "Ready":"True"
	I0408 18:19:02.641465  619015 pod_ready.go:81] duration metric: took 51.352895ms for pod "coredns-76f75df574-dptwv" in "kube-system" namespace to be "Ready" ...
	I0408 18:19:02.641478  619015 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-gp6sd" in "kube-system" namespace to be "Ready" ...
	I0408 18:19:02.869019  619015 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0408 18:19:02.869067  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:19:02.872836  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:19:02.873328  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:19:02.873360  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:19:02.873558  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:19:02.873790  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:19:02.873942  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:19:02.874085  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
	I0408 18:19:03.078554  619015 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-647801" context rescaled to 1 replicas
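kapi.go:248 scales the coredns deployment down to a single replica once the node is Ready (the default manifest starts two, hence the two coredns-76f75df574-* pods being waited on here). The same rescale through client-go's scale subresource, as a sketch (kapi.go may differ in detail):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// scaleCoreDNS sets the coredns deployment to n replicas via the
	// scale subresource, mirroring the rescale logged by kapi.go:248.
	func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, n int32) error {
		s, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		s.Spec.Replicas = n
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", s, metav1.UpdateOptions{})
		return err
	}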
	I0408 18:19:03.228011  619015 pod_ready.go:92] pod "coredns-76f75df574-gp6sd" in "kube-system" namespace has status "Ready":"True"
	I0408 18:19:03.228042  619015 pod_ready.go:81] duration metric: took 586.556009ms for pod "coredns-76f75df574-gp6sd" in "kube-system" namespace to be "Ready" ...
	I0408 18:19:03.228059  619015 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-647801" in "kube-system" namespace to be "Ready" ...
	I0408 18:19:03.283148  619015 pod_ready.go:92] pod "etcd-addons-647801" in "kube-system" namespace has status "Ready":"True"
	I0408 18:19:03.283178  619015 pod_ready.go:81] duration metric: took 55.109509ms for pod "etcd-addons-647801" in "kube-system" namespace to be "Ready" ...
	I0408 18:19:03.283190  619015 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-647801" in "kube-system" namespace to be "Ready" ...
	I0408 18:19:03.314210  619015 pod_ready.go:92] pod "kube-apiserver-addons-647801" in "kube-system" namespace has status "Ready":"True"
	I0408 18:19:03.314244  619015 pod_ready.go:81] duration metric: took 31.044672ms for pod "kube-apiserver-addons-647801" in "kube-system" namespace to be "Ready" ...
	I0408 18:19:03.314260  619015 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-647801" in "kube-system" namespace to be "Ready" ...
	I0408 18:19:03.349357  619015 pod_ready.go:92] pod "kube-controller-manager-addons-647801" in "kube-system" namespace has status "Ready":"True"
	I0408 18:19:03.349387  619015 pod_ready.go:81] duration metric: took 35.118014ms for pod "kube-controller-manager-addons-647801" in "kube-system" namespace to be "Ready" ...
	I0408 18:19:03.349405  619015 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-66qs8" in "kube-system" namespace to be "Ready" ...
	I0408 18:19:03.399465  619015 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0408 18:19:03.701214  619015 addons.go:234] Setting addon gcp-auth=true in "addons-647801"
	I0408 18:19:03.701289  619015 host.go:66] Checking if "addons-647801" exists ...
	I0408 18:19:03.701598  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:19:03.701637  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:19:03.718842  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I0408 18:19:03.719346  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:19:03.719957  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:19:03.719990  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:19:03.720376  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:19:03.720948  619015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:19:03.720983  619015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:19:03.738134  619015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37491
	I0408 18:19:03.738600  619015 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:19:03.739074  619015 main.go:141] libmachine: Using API Version  1
	I0408 18:19:03.739105  619015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:19:03.739455  619015 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:19:03.739679  619015 main.go:141] libmachine: (addons-647801) Calling .GetState
	I0408 18:19:03.741489  619015 main.go:141] libmachine: (addons-647801) Calling .DriverName
	I0408 18:19:03.741723  619015 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0408 18:19:03.741752  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHHostname
	I0408 18:19:03.745068  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:19:03.745483  619015 main.go:141] libmachine: (addons-647801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:16:c0", ip: ""} in network mk-addons-647801: {Iface:virbr1 ExpiryTime:2024-04-08 19:18:12 +0000 UTC Type:0 Mac:52:54:00:33:16:c0 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:addons-647801 Clientid:01:52:54:00:33:16:c0}
	I0408 18:19:03.745511  619015 main.go:141] libmachine: (addons-647801) DBG | domain addons-647801 has defined IP address 192.168.39.113 and MAC address 52:54:00:33:16:c0 in network mk-addons-647801
	I0408 18:19:03.745686  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHPort
	I0408 18:19:03.745871  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHKeyPath
	I0408 18:19:03.746039  619015 main.go:141] libmachine: (addons-647801) Calling .GetSSHUsername
	I0408 18:19:03.746171  619015 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/addons-647801/id_rsa Username:docker}
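Enabling gcp-auth re-dials the guest and runs "cat /var/lib/minikube/google_application_credentials.json" remotely to confirm the credentials landed. Running such a one-off remote command over an *ssh.Client is a single session, sketched here with golang.org/x/crypto/ssh's Session API (runRemote is an illustrative name):

	package sketch

	import "golang.org/x/crypto/ssh"

	// runRemote executes cmd on the guest and returns combined output,
	// the same shape as the ssh_runner.go:195 "Run:" lines above.
	func runRemote(conn *ssh.Client, cmd string) ([]byte, error) {
		sess, err := conn.NewSession()
		if err != nil {
			return nil, err
		}
		defer sess.Close()
		return sess.CombinedOutput(cmd)
	}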
	I0408 18:19:03.761549  619015 pod_ready.go:92] pod "kube-proxy-66qs8" in "kube-system" namespace has status "Ready":"True"
	I0408 18:19:03.761574  619015 pod_ready.go:81] duration metric: took 412.161137ms for pod "kube-proxy-66qs8" in "kube-system" namespace to be "Ready" ...
	I0408 18:19:03.761585  619015 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-647801" in "kube-system" namespace to be "Ready" ...
	I0408 18:19:04.158064  619015 pod_ready.go:92] pod "kube-scheduler-addons-647801" in "kube-system" namespace has status "Ready":"True"
	I0408 18:19:04.158099  619015 pod_ready.go:81] duration metric: took 396.505078ms for pod "kube-scheduler-addons-647801" in "kube-system" namespace to be "Ready" ...
	I0408 18:19:04.158113  619015 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nf6ws" in "kube-system" namespace to be "Ready" ...
	I0408 18:19:06.272973  619015 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nf6ws" in "kube-system" namespace has status "Ready":"False"
	I0408 18:19:06.381487  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.528034571s)
	I0408 18:19:06.381516  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.473696719s)
	I0408 18:19:06.381535  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.381551  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.381567  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.381582  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.381646  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.471183604s)
	I0408 18:19:06.381703  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.323882213s)
	I0408 18:19:06.381744  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.381752  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.381761  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.381764  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.381860  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.381904  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.381912  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.381921  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.381929  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.381932  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.869131955s)
	I0408 18:19:06.381953  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.381964  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.382021  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.382023  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.382039  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.382050  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.471718544s)
	I0408 18:19:06.382061  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.382069  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.382077  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.382077  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.382078  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.382088  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.382092  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.382098  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.382106  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.382241  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.382255  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.382267  619015 addons.go:470] Verifying addon ingress=true in "addons-647801"
	I0408 18:19:06.385927  619015 out.go:177] * Verifying ingress addon...
	I0408 18:19:06.382083  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.382518  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.420261073s)
	I0408 18:19:06.382056  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.386083  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.386099  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.386107  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.382568  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.382590  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.878367104s)
	I0408 18:19:06.386201  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.386213  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.382595  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.386250  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.382613  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.382632  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.386288  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.386305  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.386318  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.386351  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.382672  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.386372  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.386380  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.386386  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.386396  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.382687  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.382696  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.842191954s)
	W0408 18:19:06.386514  619015 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0408 18:19:06.386387  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.386537  619015 retry.go:31] will retry after 173.207466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
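
	[editor's note] The failure above is an ordering race, not a broken manifest: the VolumeSnapshotClass custom resource sits in the same apply batch as the CRDs that define it, so the API server has no REST mapping for the kind yet ("ensure CRDs are installed first"). minikube's answer is the retry logged at retry.go:31, by which time the CRDs are established and the second apply succeeds. A minimal Go sketch of that retry shape, assuming kubectl on PATH; applyWithRetry, the attempt count, and the backoff values are illustrative, not minikube's actual code:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // applyWithRetry re-runs `kubectl apply --force` until it succeeds or the
	    // attempts run out, doubling the backoff so freshly created CRDs have time
	    // to become established before the custom resources are applied again.
	    func applyWithRetry(kubeconfig string, manifests []string, attempts int, backoff time.Duration) error {
	        args := []string{"--kubeconfig", kubeconfig, "apply", "--force"}
	        for _, m := range manifests {
	            args = append(args, "-f", m)
	        }
	        var lastErr error
	        for i := 0; i < attempts; i++ {
	            out, err := exec.Command("kubectl", args...).CombinedOutput()
	            if err == nil {
	                return nil
	            }
	            lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
	            time.Sleep(backoff)
	            backoff *= 2
	        }
	        return lastErr
	    }

	    func main() {
	        // Hypothetical invocation mirroring the snapshot addon batch above.
	        err := applyWithRetry("/var/lib/minikube/kubeconfig",
	            []string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
	            5, 200*time.Millisecond)
	        if err != nil {
	            fmt.Println(err)
	        }
	    }
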
	I0408 18:19:06.382751  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.836777009s)
	I0408 18:19:06.386566  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.386573  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.386051  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.388332  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.386634  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.386658  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.386686  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.388644  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.386692  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.388698  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.388715  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.388728  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.386704  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.386716  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.388788  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.390350  619015 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-647801 service yakd-dashboard -n yakd-dashboard
	
	I0408 18:19:06.389088  619015 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0408 18:19:06.386854  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.386806  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.389494  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.389504  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.389514  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.389517  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.389525  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.389529  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.392354  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.392371  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.392388  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.392391  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.392401  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.392410  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.392392  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.392374  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.392477  619015 addons.go:470] Verifying addon metrics-server=true in "addons-647801"
	I0408 18:19:06.394023  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.394026  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:06.394043  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.394090  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.394050  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.394156  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.394165  619015 addons.go:470] Verifying addon registry=true in "addons-647801"
	I0408 18:19:06.395901  619015 out.go:177] * Verifying registry addon...
	I0408 18:19:06.398160  619015 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0408 18:19:06.447182  619015 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0408 18:19:06.447755  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:06.449324  619015 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0408 18:19:06.449343  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
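
	[editor's note] The kapi.go lines here and throughout the rest of this log are one poll loop per addon: list the pods matching a label selector, log the current state, and re-check until every match reports the Ready condition. A rough client-go equivalent, offered as a sketch only (waitForLabeledPodsReady, the 2-second interval, and the package name are assumptions, not minikube's implementation):

	    package podwait

	    import (
	        "context"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitForLabeledPodsReady polls until at least one pod matches the selector
	    // and all matching pods have the Ready condition set to True.
	    func waitForLabeledPodsReady(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
	            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
	            if err != nil || len(pods.Items) == 0 {
	                return false, nil // transient error or pods not created yet: keep polling
	            }
	            for i := range pods.Items {
	                if !podReady(&pods.Items[i]) {
	                    return false, nil // still Pending/NotReady, as logged above
	                }
	            }
	            return true, nil
	        })
	    }

	    func podReady(p *corev1.Pod) bool {
	        for _, c := range p.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }
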
	I0408 18:19:06.469187  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:06.469210  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:06.469534  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:06.469556  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:06.560747  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 18:19:06.906739  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:06.911129  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:07.462909  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:07.466736  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:07.509599  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.456729065s)
	I0408 18:19:07.509626  619015 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.767877554s)
	I0408 18:19:07.509667  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:07.509683  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:07.511547  619015 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0408 18:19:07.510072  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:07.510106  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:07.513301  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:07.513326  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:07.513336  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:07.515619  619015 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0408 18:19:07.513680  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:07.513714  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:07.517386  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:07.517404  619015 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0408 18:19:07.517419  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0408 18:19:07.517424  619015 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-647801"
	I0408 18:19:07.519015  619015 out.go:177] * Verifying csi-hostpath-driver addon...
	I0408 18:19:07.520964  619015 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0408 18:19:07.569821  619015 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0408 18:19:07.569855  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:07.644414  619015 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0408 18:19:07.644443  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0408 18:19:07.754426  619015 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0408 18:19:07.754458  619015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
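
	[editor's note] The "scp memory -->" lines mean the gcp-auth manifests are not files on the host: they are assets embedded in the minikube binary, streamed over the existing SSH session to a path on the node (with the byte count logged), and only then applied with the node-local kubectl. A reduced sketch of that copy step; copyAsset and the io.Writer plumbing are hypothetical stand-ins for the real SSH transfer:

	    package assetcopy

	    import (
	        "fmt"
	        "io"
	    )

	    // copyAsset streams an in-memory manifest to a writer backed by the remote
	    // file and reports the byte count, as in the "scp memory --> ..." lines.
	    func copyAsset(dst io.Writer, remotePath string, data []byte) error {
	        n, err := dst.Write(data)
	        if err != nil {
	            return err
	        }
	        fmt.Printf("scp memory --> %s (%d bytes)\n", remotePath, n)
	        return nil
	    }
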
	I0408 18:19:07.890418  619015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0408 18:19:07.904129  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:07.909768  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:08.031437  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:08.466855  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:08.469517  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:08.527020  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:08.664756  619015 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nf6ws" in "kube-system" namespace has status "Ready":"False"
	I0408 18:19:08.897677  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:08.902888  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:08.979777  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.418976167s)
	I0408 18:19:08.979834  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:08.979846  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:08.980165  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:08.980253  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:08.980269  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:08.980283  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:08.980297  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:08.980577  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:08.980593  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:09.027191  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:09.420238  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:09.420529  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:09.436569  619015 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.546091687s)
	I0408 18:19:09.436630  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:09.436645  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:09.437027  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:09.437094  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:09.437113  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:09.437127  619015 main.go:141] libmachine: Making call to close driver server
	I0408 18:19:09.437140  619015 main.go:141] libmachine: (addons-647801) Calling .Close
	I0408 18:19:09.437406  619015 main.go:141] libmachine: (addons-647801) DBG | Closing plugin on server side
	I0408 18:19:09.437471  619015 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:19:09.437486  619015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:19:09.438729  619015 addons.go:470] Verifying addon gcp-auth=true in "addons-647801"
	I0408 18:19:09.440999  619015 out.go:177] * Verifying gcp-auth addon...
	I0408 18:19:09.443760  619015 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0408 18:19:09.474382  619015 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0408 18:19:09.474412  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:09.541060  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:09.897787  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:09.907375  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:09.950346  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:10.031707  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:10.396767  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:10.404951  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:10.448034  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:10.527887  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:10.671736  619015 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nf6ws" in "kube-system" namespace has status "Ready":"False"
	I0408 18:19:10.897642  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:10.903337  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:10.947449  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:11.027371  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:11.397754  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:11.403867  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:11.453287  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:11.528443  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:11.898123  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:11.902383  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:11.947932  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:12.035223  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:12.397948  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:12.403046  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:12.447903  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:12.528205  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:12.897222  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:12.904763  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:12.948117  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:13.028041  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:13.581947  619015 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-nf6ws" in "kube-system" namespace has status "Ready":"False"
	I0408 18:19:13.582674  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:13.584087  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:13.586980  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:13.588227  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:13.898527  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:13.902902  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:13.948249  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:14.028282  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:14.400251  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:14.409615  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:14.448565  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:14.533037  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:14.899541  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:14.904169  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:14.947408  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:15.027657  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:15.172122  619015 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-nf6ws" in "kube-system" namespace has status "Ready":"True"
	I0408 18:19:15.172149  619015 pod_ready.go:81] duration metric: took 11.014026989s for pod "nvidia-device-plugin-daemonset-nf6ws" in "kube-system" namespace to be "Ready" ...
	I0408 18:19:15.172182  619015 pod_ready.go:38] duration metric: took 12.620134433s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 18:19:15.172202  619015 api_server.go:52] waiting for apiserver process to appear ...
	I0408 18:19:15.172261  619015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 18:19:15.198566  619015 api_server.go:72] duration metric: took 19.380983561s to wait for apiserver process to appear ...
	I0408 18:19:15.198597  619015 api_server.go:88] waiting for apiserver healthz status ...
	I0408 18:19:15.198624  619015 api_server.go:253] Checking apiserver healthz at https://192.168.39.113:8443/healthz ...
	I0408 18:19:15.203386  619015 api_server.go:279] https://192.168.39.113:8443/healthz returned 200:
	ok
	I0408 18:19:15.206342  619015 api_server.go:141] control plane version: v1.29.3
	I0408 18:19:15.206388  619015 api_server.go:131] duration metric: took 7.781947ms to wait for apiserver health ...
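
	[editor's note] The control-plane health gate above is nothing more than an HTTPS GET against /healthz that must come back 200 with body "ok". A minimal sketch; the InsecureSkipVerify transport is for illustration only (minikube trusts the cluster's real CA), and checkHealthz is an assumed name:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    // checkHealthz issues the same probe as api_server.go:253 above: GET the
	    // endpoint and require HTTP 200. Production callers should verify the
	    // apiserver certificate rather than skipping TLS verification.
	    func checkHealthz(url string) error {
	        client := &http.Client{
	            Timeout:   5 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        resp, err := client.Get(url)
	        if err != nil {
	            return err
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        if resp.StatusCode != http.StatusOK {
	            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	        }
	        return nil // body is typically just "ok"
	    }

	    func main() {
	        if err := checkHealthz("https://192.168.39.113:8443/healthz"); err != nil {
	            fmt.Println(err)
	        }
	    }
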
	I0408 18:19:15.206399  619015 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 18:19:15.226305  619015 system_pods.go:59] 18 kube-system pods found
	I0408 18:19:15.226359  619015 system_pods.go:61] "coredns-76f75df574-dptwv" [da9701e8-8413-472b-be3d-8013c668a20c] Running
	I0408 18:19:15.226371  619015 system_pods.go:61] "csi-hostpath-attacher-0" [65a0d5f3-bed0-415e-951a-f939ca6f51b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0408 18:19:15.226381  619015 system_pods.go:61] "csi-hostpath-resizer-0" [2ec5d557-51a4-48fa-b9fb-0e13e29c7b33] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0408 18:19:15.226397  619015 system_pods.go:61] "csi-hostpathplugin-wd6qb" [50721bd4-3e39-4c8d-84ef-87046bc0c28a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0408 18:19:15.226408  619015 system_pods.go:61] "etcd-addons-647801" [d007c91c-1a2d-4b5d-be46-2be68e796e63] Running
	I0408 18:19:15.226418  619015 system_pods.go:61] "kube-apiserver-addons-647801" [e95b9724-e0b6-4516-bd53-a8f882b95fcc] Running
	I0408 18:19:15.226427  619015 system_pods.go:61] "kube-controller-manager-addons-647801" [db14d28c-af25-4aa6-a392-daad7bb3aba5] Running
	I0408 18:19:15.226439  619015 system_pods.go:61] "kube-ingress-dns-minikube" [05b19148-ecb3-4e18-96df-0ddb68079902] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0408 18:19:15.226448  619015 system_pods.go:61] "kube-proxy-66qs8" [c331417a-c9c6-4193-a8f5-b5de4632c197] Running
	I0408 18:19:15.226455  619015 system_pods.go:61] "kube-scheduler-addons-647801" [b77c2a77-880d-43e5-9b26-0e12ad48ec12] Running
	I0408 18:19:15.226464  619015 system_pods.go:61] "metrics-server-75d6c48ddd-r928h" [c4c3cd08-0a1e-4a92-b4a2-29ef016b8992] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 18:19:15.226472  619015 system_pods.go:61] "nvidia-device-plugin-daemonset-nf6ws" [0870220b-2774-4360-844d-84b35aa706a1] Running
	I0408 18:19:15.226484  619015 system_pods.go:61] "registry-proxy-k57pz" [1f84e77c-c70b-4a43-b5a5-f5e7dce1277a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0408 18:19:15.226500  619015 system_pods.go:61] "registry-r44nw" [2fa023e4-daf2-4594-bbd7-4251b3206eab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0408 18:19:15.226513  619015 system_pods.go:61] "snapshot-controller-58dbcc7b99-jw6f9" [62df7a7a-1c16-42b0-8160-58eaa5a8d326] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 18:19:15.226530  619015 system_pods.go:61] "snapshot-controller-58dbcc7b99-stw4b" [bda4542f-11c7-409e-8e73-e914ac8d731a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 18:19:15.226540  619015 system_pods.go:61] "storage-provisioner" [87e2c047-ad0b-4e2f-a9a8-b5bade5a206d] Running
	I0408 18:19:15.226549  619015 system_pods.go:61] "tiller-deploy-7b677967b9-vmzrg" [a4205dc8-51a8-4c10-89dc-fbd5a9fa2346] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0408 18:19:15.226560  619015 system_pods.go:74] duration metric: took 20.153345ms to wait for pod list to return data ...
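
	[editor's note] Each entry in the pod inventory above is the pod phase plus, for anything not Running, the Ready and ContainersReady conditions with the unready container names taken from the condition message. A hypothetical formatter that reproduces that layout from a Pod object (podState is an assumed name):

	    package podwait

	    import (
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	    )

	    // podState renders "Running" for healthy pods, or
	    // "Pending / Ready:ContainersNotReady (...) / ContainersReady:..." otherwise.
	    func podState(p *corev1.Pod) string {
	        s := string(p.Status.Phase)
	        for _, c := range p.Status.Conditions {
	            if (c.Type == corev1.PodReady || c.Type == corev1.ContainersReady) && c.Status != corev1.ConditionTrue {
	                s += fmt.Sprintf(" / %s:%s (%s)", c.Type, c.Reason, c.Message)
	            }
	        }
	        return s
	    }
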
	I0408 18:19:15.226575  619015 default_sa.go:34] waiting for default service account to be created ...
	I0408 18:19:15.229816  619015 default_sa.go:45] found service account: "default"
	I0408 18:19:15.229836  619015 default_sa.go:55] duration metric: took 3.249794ms for default service account to be created ...
	I0408 18:19:15.229845  619015 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 18:19:15.242323  619015 system_pods.go:86] 18 kube-system pods found
	I0408 18:19:15.242350  619015 system_pods.go:89] "coredns-76f75df574-dptwv" [da9701e8-8413-472b-be3d-8013c668a20c] Running
	I0408 18:19:15.242361  619015 system_pods.go:89] "csi-hostpath-attacher-0" [65a0d5f3-bed0-415e-951a-f939ca6f51b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0408 18:19:15.242371  619015 system_pods.go:89] "csi-hostpath-resizer-0" [2ec5d557-51a4-48fa-b9fb-0e13e29c7b33] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0408 18:19:15.242379  619015 system_pods.go:89] "csi-hostpathplugin-wd6qb" [50721bd4-3e39-4c8d-84ef-87046bc0c28a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0408 18:19:15.242384  619015 system_pods.go:89] "etcd-addons-647801" [d007c91c-1a2d-4b5d-be46-2be68e796e63] Running
	I0408 18:19:15.242389  619015 system_pods.go:89] "kube-apiserver-addons-647801" [e95b9724-e0b6-4516-bd53-a8f882b95fcc] Running
	I0408 18:19:15.242393  619015 system_pods.go:89] "kube-controller-manager-addons-647801" [db14d28c-af25-4aa6-a392-daad7bb3aba5] Running
	I0408 18:19:15.242402  619015 system_pods.go:89] "kube-ingress-dns-minikube" [05b19148-ecb3-4e18-96df-0ddb68079902] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0408 18:19:15.242407  619015 system_pods.go:89] "kube-proxy-66qs8" [c331417a-c9c6-4193-a8f5-b5de4632c197] Running
	I0408 18:19:15.242411  619015 system_pods.go:89] "kube-scheduler-addons-647801" [b77c2a77-880d-43e5-9b26-0e12ad48ec12] Running
	I0408 18:19:15.242418  619015 system_pods.go:89] "metrics-server-75d6c48ddd-r928h" [c4c3cd08-0a1e-4a92-b4a2-29ef016b8992] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 18:19:15.242422  619015 system_pods.go:89] "nvidia-device-plugin-daemonset-nf6ws" [0870220b-2774-4360-844d-84b35aa706a1] Running
	I0408 18:19:15.242441  619015 system_pods.go:89] "registry-proxy-k57pz" [1f84e77c-c70b-4a43-b5a5-f5e7dce1277a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0408 18:19:15.242453  619015 system_pods.go:89] "registry-r44nw" [2fa023e4-daf2-4594-bbd7-4251b3206eab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0408 18:19:15.242459  619015 system_pods.go:89] "snapshot-controller-58dbcc7b99-jw6f9" [62df7a7a-1c16-42b0-8160-58eaa5a8d326] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 18:19:15.242465  619015 system_pods.go:89] "snapshot-controller-58dbcc7b99-stw4b" [bda4542f-11c7-409e-8e73-e914ac8d731a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 18:19:15.242470  619015 system_pods.go:89] "storage-provisioner" [87e2c047-ad0b-4e2f-a9a8-b5bade5a206d] Running
	I0408 18:19:15.242476  619015 system_pods.go:89] "tiller-deploy-7b677967b9-vmzrg" [a4205dc8-51a8-4c10-89dc-fbd5a9fa2346] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0408 18:19:15.242485  619015 system_pods.go:126] duration metric: took 12.634028ms to wait for k8s-apps to be running ...
	I0408 18:19:15.242492  619015 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 18:19:15.242538  619015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:19:15.267812  619015 system_svc.go:56] duration metric: took 25.304525ms WaitForService to wait for kubelet
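
	[editor's note] The kubelet check is a single exit-code test over SSH: systemctl is-active --quiet prints nothing and exits 0 only when the unit is active. A local sketch of the same probe (kubeletRunning is an assumed name, and this runs locally rather than over the SSH session minikube uses):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // kubeletRunning mirrors `sudo systemctl is-active --quiet`: with --quiet
	    // there is no output to parse, only the process exit status.
	    func kubeletRunning() bool {
	        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	    }

	    func main() {
	        fmt.Println("kubelet active:", kubeletRunning())
	    }
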
	I0408 18:19:15.267850  619015 kubeadm.go:576] duration metric: took 19.450274007s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 18:19:15.267873  619015 node_conditions.go:102] verifying NodePressure condition ...
	I0408 18:19:15.271142  619015 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 18:19:15.271171  619015 node_conditions.go:123] node cpu capacity is 2
	I0408 18:19:15.271186  619015 node_conditions.go:105] duration metric: took 3.307429ms to run NodePressure ...
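
	[editor's note] The NodePressure verification reads the capacity figures logged above (ephemeral storage in Ki, CPU count) straight off the Node object's status. A client-go sketch under the same caveats as the earlier poll example; printNodeCapacity is an assumed name:

	    package nodecheck

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // printNodeCapacity reports the two values checked above:
	    // ephemeral-storage capacity and CPU capacity.
	    func printNodeCapacity(cs kubernetes.Interface, name string) error {
	        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	        if err != nil {
	            return err
	        }
	        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	        cpu := node.Status.Capacity[corev1.ResourceCPU]
	        fmt.Printf("node storage ephemeral capacity is %s, cpu capacity is %s\n", storage.String(), cpu.String())
	        return nil
	    }
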
	I0408 18:19:15.271200  619015 start.go:240] waiting for startup goroutines ...
	I0408 18:19:15.398224  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:15.402242  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:15.448363  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:15.527225  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:15.897448  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:15.905106  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:15.947328  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:16.030295  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:16.397529  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:16.409848  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:16.448231  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:16.527616  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:16.897703  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:16.903634  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:16.947992  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:17.028625  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:17.397277  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:17.403649  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:17.450520  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:17.563723  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:17.975485  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:17.976071  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:17.980143  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:18.028707  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:18.397610  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:18.403414  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:18.448314  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:18.526570  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:18.898103  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:18.903409  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:18.948480  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:19.027113  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:19.397326  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:19.403082  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:19.448583  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:19.526986  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:19.900421  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:19.904930  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:19.948405  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:20.028464  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:20.399540  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:20.402769  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:20.448947  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:20.528432  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:20.902985  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:20.910491  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:20.947883  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:21.027222  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:21.400685  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:21.403636  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:21.448652  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:21.528299  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:21.897959  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:21.903339  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:21.948678  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:22.028233  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:22.396808  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:22.404396  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:22.448210  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:22.529699  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:22.900839  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:22.907866  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:22.948124  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:23.031775  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:23.398231  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:23.402346  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:23.448741  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:23.529144  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:23.916960  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:23.922635  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:23.953559  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:24.028458  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:24.396937  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:24.402192  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:24.450262  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:24.526211  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:24.903960  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:24.923902  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:24.951630  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:25.026854  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:25.397814  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:25.407080  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:25.448477  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:25.526891  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:25.898295  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:25.914375  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:25.947518  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:26.027082  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:26.397049  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:26.405970  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:26.449359  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:26.527648  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:26.896545  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:26.910943  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:26.953965  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:27.028049  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:27.397663  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:27.405313  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:27.447701  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:27.534562  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:27.898725  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:27.915261  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:27.947357  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:28.026448  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:28.397956  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:28.404478  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:28.447842  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:28.526897  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:28.899617  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:28.910299  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:28.948277  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:29.027896  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:29.398644  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:29.403980  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:29.447984  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:29.527502  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:29.897134  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:29.909429  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:29.970048  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:30.028213  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:30.397537  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:30.402658  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:30.447917  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:30.529076  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:30.897962  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:30.903854  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:30.948494  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:31.029505  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:31.397747  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:31.404094  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:31.448106  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:31.529182  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:31.897676  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:31.903373  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:31.947784  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:32.027233  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:32.398738  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:32.404004  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:32.447867  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:32.527045  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:32.897362  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:32.903485  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:32.947898  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:33.027652  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:33.397317  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:33.403155  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:33.450907  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:33.528059  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:33.896742  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:33.905888  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:33.961327  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:34.032867  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:34.397431  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:34.403311  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:34.448985  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:34.528520  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:34.897785  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:34.904827  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:34.948088  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:35.027814  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:35.397644  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:35.405172  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:35.448706  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:35.527883  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:35.897519  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:35.904720  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:35.970671  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:36.027102  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:36.399448  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:36.402775  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:36.448729  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:36.526903  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:36.897559  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:36.903561  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:36.947865  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:37.028220  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:37.398878  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:37.403902  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:37.447852  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:37.526873  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:37.898581  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:37.904639  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:37.947756  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:38.027702  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:38.398745  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:38.403766  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:38.448242  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:38.528927  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:38.898114  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:38.903289  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:38.947027  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:39.044139  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:39.396841  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:39.402239  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:39.449247  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:39.526438  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:39.897239  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:39.902706  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:39.949085  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:40.029823  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:40.398074  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:40.402338  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:40.449744  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:40.529365  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:40.896636  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:40.903174  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:40.952837  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:41.033414  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:41.674222  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:41.674922  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:41.679644  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:41.679756  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:41.896654  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:41.903035  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:41.949485  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:42.028769  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:42.407920  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:42.410569  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:42.449614  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:42.528137  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:42.897611  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:42.911579  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:42.954196  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:43.033774  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:43.397342  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:43.402915  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:43.448099  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:43.528601  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:43.901653  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:43.915185  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:43.948051  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:44.047128  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:44.399825  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:44.405984  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:44.448496  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:44.527587  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:44.938179  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:44.943314  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:44.948682  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:45.040351  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:45.398291  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:45.402088  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:45.448163  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:45.528357  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:46.113315  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:46.116076  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:46.116898  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:46.122332  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:46.408608  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:46.411649  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:46.448965  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:46.531960  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:46.897385  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:46.903004  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:19:46.950721  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:47.029674  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:47.397574  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:47.403585  619015 kapi.go:107] duration metric: took 41.005427805s to wait for kubernetes.io/minikube-addons=registry ...
	I0408 18:19:47.449708  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:47.527163  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:47.900551  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:47.949626  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:48.038640  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:48.398370  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:48.449312  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:48.527227  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:48.897608  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:48.950329  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:49.026917  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:49.397611  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:49.447277  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:49.527116  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:50.013413  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:50.014000  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:50.037773  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:50.397585  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:50.449100  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:50.528729  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:50.897805  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:50.948196  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:51.029443  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:51.397706  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:51.447999  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:51.529808  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:51.898955  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:51.948545  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:52.028156  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:52.399583  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:52.448245  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:52.528535  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:52.898160  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:52.948253  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:53.027786  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:53.403988  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:53.450253  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:53.530840  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:53.900624  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:53.948963  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:54.034261  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:54.398630  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:54.448156  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:54.530964  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:54.897082  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:54.952512  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:55.034084  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:55.399301  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:55.448838  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:55.529476  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:55.897169  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:55.948651  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:56.027875  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:56.397372  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:56.451818  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:56.545058  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:56.897518  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:56.947038  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:57.028031  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:57.397938  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:57.449174  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:57.529112  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:57.898668  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:57.947967  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:58.028959  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:58.402125  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:58.447846  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:58.527745  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:58.898327  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:58.975585  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:59.030433  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:59.399218  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:59.448462  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:19:59.526973  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:19:59.897242  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:19:59.947709  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:00.029070  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:20:00.397839  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:00.447968  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:00.528681  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:20:00.898864  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:00.950002  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:01.027056  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:20:01.396910  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:01.448767  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:01.527826  619015 kapi.go:107] duration metric: took 54.006859669s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0408 18:20:01.897125  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:01.948642  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:02.398201  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:02.448465  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:02.897595  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:02.948451  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:03.398406  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:03.448515  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:03.897680  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:03.947434  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:04.397524  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:04.588588  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:04.897125  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:04.948139  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:05.398991  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:05.450115  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:05.897284  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:05.947560  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:06.398353  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:06.448656  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:06.897274  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:06.947902  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:07.399084  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:07.447871  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:07.897649  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:07.948442  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:08.397774  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:08.447955  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:08.897465  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:08.947741  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:09.397530  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:09.448662  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:09.897322  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:09.948473  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:10.401478  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:10.447508  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:10.897816  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:10.948251  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:11.397669  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:11.448667  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:11.897897  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:11.947995  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:12.398367  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:12.447964  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:12.897458  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:12.947978  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:13.906869  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:13.909216  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:13.914571  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:13.949842  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:14.397451  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:14.449396  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:14.896325  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:14.948473  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:15.397062  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:15.447751  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:15.899901  619015 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:20:15.950778  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:16.408654  619015 kapi.go:107] duration metric: took 1m10.019572923s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0408 18:20:16.449099  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:16.951569  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:17.450015  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:17.949285  619015 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:20:18.450361  619015 kapi.go:107] duration metric: took 1m9.006597364s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0408 18:20:18.452296  619015 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-647801 cluster.
	I0408 18:20:18.453655  619015 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0408 18:20:18.455087  619015 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0408 18:20:18.456540  619015 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, cloud-spanner, ingress-dns, helm-tiller, metrics-server, yakd, inspektor-gadget, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0408 18:20:18.458070  619015 addons.go:505] duration metric: took 1m22.640488894s for enable addons: enabled=[nvidia-device-plugin storage-provisioner storage-provisioner-rancher cloud-spanner ingress-dns helm-tiller metrics-server yakd inspektor-gadget default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0408 18:20:18.458118  619015 start.go:245] waiting for cluster config update ...
	I0408 18:20:18.458143  619015 start.go:254] writing updated cluster config ...
	I0408 18:20:18.458408  619015 ssh_runner.go:195] Run: rm -f paused
	I0408 18:20:18.511777  619015 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 18:20:18.513617  619015 out.go:177] * Done! kubectl is now configured to use "addons-647801" cluster and "default" namespace by default
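	For reference, the `gcp-auth-skip-secret` opt-out described in the output above can be exercised with a pod manifest along these lines (a minimal sketch: the pod name, image, and label value are illustrative, and only the label key is taken from the log messages):
	
	kubectl --context addons-647801 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds              # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"  # value assumed; the log above only names the key
	spec:
	  containers:
	  - name: app
	    image: busybox                # placeholder image
	    command: ["sleep", "3600"]
	EOF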
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	3a2c35afeb47c       7373e995f4086       18 seconds ago       Running             headlamp                                 0                   3362cecc3e54a       headlamp-5b77dbd7c4-mpdm9
	f88bf3a7e0f8b       dd1b12fcb6097       31 seconds ago       Running             hello-world-app                          0                   c639badf83d64       hello-world-app-5d77478584-r9njc
	307e0f0ccc4d9       a416a98b71e22       32 seconds ago       Exited              helper-pod                               0                   22e2fc9bb1abb       helper-pod-delete-pvc-98f67a28-2944-4b09-a5b8-08ff2d55447a
	e12eb97eabd07       ba5dc23f65d4c       36 seconds ago       Exited              busybox                                  0                   d67b65307f6cb       test-local-path
	f8523ff5d2821       e289a478ace02       42 seconds ago       Running             nginx                                    0                   8771b0f32197a       nginx
	7d79f5277464b       db2fc13d44d50       59 seconds ago       Running             gcp-auth                                 0                   5edb30f6f7aa8       gcp-auth-7d69788767-szpkq
	77a3b1592ef58       ffcc66479b5ba       About a minute ago   Exited              controller                               0                   4c00237ae0835       ingress-nginx-controller-65496f9567-vxhlf
	5be475697aa38       738351fd438f0       About a minute ago   Running             csi-snapshotter                          0                   da5c6f5ebe74f       csi-hostpathplugin-wd6qb
	bb0620fb426f9       931dbfd16f87c       About a minute ago   Running             csi-provisioner                          0                   da5c6f5ebe74f       csi-hostpathplugin-wd6qb
	46b166f0b98ca       e899260153aed       About a minute ago   Running             liveness-probe                           0                   da5c6f5ebe74f       csi-hostpathplugin-wd6qb
	097f217422be2       e255e073c508c       About a minute ago   Running             hostpath                                 0                   da5c6f5ebe74f       csi-hostpathplugin-wd6qb
	fd509cf43718f       88ef14a257f42       About a minute ago   Running             node-driver-registrar                    0                   da5c6f5ebe74f       csi-hostpathplugin-wd6qb
	aa2730d8afe99       19a639eda60f0       About a minute ago   Running             csi-resizer                              0                   55d88e9a90e6f       csi-hostpath-resizer-0
	7978baf17c4f5       a1ed5895ba635       About a minute ago   Running             csi-external-health-monitor-controller   0                   da5c6f5ebe74f       csi-hostpathplugin-wd6qb
	f3b9ef71b5fa5       b29d748098e32       About a minute ago   Exited              patch                                    0                   13d4d6780a183       ingress-nginx-admission-patch-9mwks
	e8804881f7710       b29d748098e32       About a minute ago   Exited              create                                   0                   2492a624d458e       ingress-nginx-admission-create-8v8rr
	a77f970717cc9       59cbb42146a37       About a minute ago   Running             csi-attacher                             0                   161c78139256a       csi-hostpath-attacher-0
	3c31d7d872703       31de47c733c91       About a minute ago   Running             yakd                                     0                   566d9782b251f       yakd-dashboard-9947fc6bf-6bth7
	5a9344b76c925       6e38f40d628db       2 minutes ago        Running             storage-provisioner                      0                   056fb37ef5c88       storage-provisioner
	ca72a2f3bf56e       cbb01a7bd410d       2 minutes ago        Running             coredns                                  0                   680905355e0b9       coredns-76f75df574-dptwv
	c08238c3eed00       a1d263b5dc5b0       2 minutes ago        Running             kube-proxy                               0                   cba092ccad848       kube-proxy-66qs8
	854157b3fcf90       8c390d98f50c0       2 minutes ago        Running             kube-scheduler                           0                   8a903be0c0685       kube-scheduler-addons-647801
	ea9620ff06d1d       39f995c9f1996       2 minutes ago        Running             kube-apiserver                           0                   41ad8e7681a05       kube-apiserver-addons-647801
	affe99c11d9b3       3861cfcd7c04c       2 minutes ago        Running             etcd                                     0                   0f96fff8ed5c7       etcd-addons-647801
	e0890d0ccaa9f       6052a25da3f97       2 minutes ago        Running             kube-controller-manager                  0                   c3a02ecb42b7d       kube-controller-manager-addons-647801
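	The listing above is node-side CRI state; assuming the containerd runtime used by this job, a comparable listing (including the Exited entries) can be reproduced directly on the minikube node with a command sketch like the following, which is not part of the captured output:
	
	minikube -p addons-647801 ssh -- sudo crictl ps -a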
	
	
	==> containerd <==
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.582879720Z" level=info msg="shim disconnected" id=65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269 namespace=k8s.io
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.582955359Z" level=warning msg="cleaning up after shim disconnected" id=65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269 namespace=k8s.io
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.582967185Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.607360653Z" level=info msg="StopContainer for \"aac47f897e228db0da34b80f0c53a626665f15859ef90b8e6e31c17544dd97a8\" returns successfully"
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.608332230Z" level=info msg="StopPodSandbox for \"c3cee65bebf6635e5d6bb7d3d11327008dd3a0104b271f2fcd5f92708ebcb49f\""
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.608697676Z" level=info msg="Container to stop \"aac47f897e228db0da34b80f0c53a626665f15859ef90b8e6e31c17544dd97a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.641986954Z" level=info msg="StopContainer for \"65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269\" returns successfully"
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.642714756Z" level=info msg="StopPodSandbox for \"b06fcfcf73adadad84c1c78fa392698619c07459ccc30df82fe77598f8a2f544\""
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.642848155Z" level=info msg="Container to stop \"65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.686343643Z" level=info msg="shim disconnected" id=c3cee65bebf6635e5d6bb7d3d11327008dd3a0104b271f2fcd5f92708ebcb49f namespace=k8s.io
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.686817683Z" level=warning msg="cleaning up after shim disconnected" id=c3cee65bebf6635e5d6bb7d3d11327008dd3a0104b271f2fcd5f92708ebcb49f namespace=k8s.io
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.686883598Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.712921672Z" level=info msg="shim disconnected" id=b06fcfcf73adadad84c1c78fa392698619c07459ccc30df82fe77598f8a2f544 namespace=k8s.io
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.713625592Z" level=warning msg="cleaning up after shim disconnected" id=b06fcfcf73adadad84c1c78fa392698619c07459ccc30df82fe77598f8a2f544 namespace=k8s.io
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.713675935Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.816252708Z" level=info msg="TearDown network for sandbox \"c3cee65bebf6635e5d6bb7d3d11327008dd3a0104b271f2fcd5f92708ebcb49f\" successfully"
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.816315584Z" level=info msg="StopPodSandbox for \"c3cee65bebf6635e5d6bb7d3d11327008dd3a0104b271f2fcd5f92708ebcb49f\" returns successfully"
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.836022406Z" level=info msg="TearDown network for sandbox \"b06fcfcf73adadad84c1c78fa392698619c07459ccc30df82fe77598f8a2f544\" successfully"
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.836135305Z" level=info msg="StopPodSandbox for \"b06fcfcf73adadad84c1c78fa392698619c07459ccc30df82fe77598f8a2f544\" returns successfully"
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.949772697Z" level=info msg="RemoveContainer for \"aac47f897e228db0da34b80f0c53a626665f15859ef90b8e6e31c17544dd97a8\""
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.965848999Z" level=info msg="RemoveContainer for \"aac47f897e228db0da34b80f0c53a626665f15859ef90b8e6e31c17544dd97a8\" returns successfully"
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.966715854Z" level=error msg="ContainerStatus for \"aac47f897e228db0da34b80f0c53a626665f15859ef90b8e6e31c17544dd97a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aac47f897e228db0da34b80f0c53a626665f15859ef90b8e6e31c17544dd97a8\": not found"
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.974948242Z" level=info msg="RemoveContainer for \"65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269\""
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.983734600Z" level=info msg="RemoveContainer for \"65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269\" returns successfully"
	Apr 08 18:21:16 addons-647801 containerd[653]: time="2024-04-08T18:21:16.984460334Z" level=error msg="ContainerStatus for \"65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269\": not found"
	
	
	==> coredns [ca72a2f3bf56e38d8fe36b365bdd6363f8e10c0d7bccbcddaa3585343e588d52] <==
	[INFO] 10.244.0.21:56254 - 31751 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000134393s
	[INFO] 10.244.0.21:42418 - 58097 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000100179s
	[INFO] 10.244.0.21:42418 - 1668 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000205161s
	[INFO] 10.244.0.21:42418 - 12733 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000118089s
	[INFO] 10.244.0.21:42418 - 3145 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000105106s
	[INFO] 10.244.0.21:56254 - 50552 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000091063s
	[INFO] 10.244.0.21:56254 - 54394 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000111562s
	[INFO] 10.244.0.21:56254 - 34691 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074442s
	[INFO] 10.244.0.21:56254 - 32576 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000086738s
	[INFO] 10.244.0.21:56254 - 32596 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068097s
	[INFO] 10.244.0.21:56254 - 8670 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075141s
	[INFO] 10.244.0.21:51483 - 61100 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000119748s
	[INFO] 10.244.0.21:51483 - 62136 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075365s
	[INFO] 10.244.0.21:47588 - 61388 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000030161s
	[INFO] 10.244.0.21:47588 - 49394 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000082192s
	[INFO] 10.244.0.21:47588 - 56814 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000210826s
	[INFO] 10.244.0.21:47588 - 26179 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000096413s
	[INFO] 10.244.0.21:47588 - 4198 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052613s
	[INFO] 10.244.0.21:51483 - 56745 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000023716s
	[INFO] 10.244.0.21:51483 - 12919 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062484s
	[INFO] 10.244.0.21:47588 - 48868 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067996s
	[INFO] 10.244.0.21:51483 - 25394 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048236s
	[INFO] 10.244.0.21:47588 - 48264 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000079931s
	[INFO] 10.244.0.21:51483 - 24035 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041802s
	[INFO] 10.244.0.21:51483 - 8721 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000177836s
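The NXDOMAIN/NOERROR pattern above is the pod's resolv.conf search path at work: with the default ndots:5, a name with fewer than five dots such as hello-world-app.default.svc.cluster.local is first tried with each search suffix appended (hence the NXDOMAIN answers for the ...ingress-nginx.svc.cluster.local, ...svc.cluster.local, and ...cluster.local expansions) before the absolute name resolves with NOERROR. The search path can be inspected from the pod itself (illustrative; assumes the hello-world-app image ships cat):

    kubectl --context addons-647801 exec deploy/hello-world-app -- cat /etc/resolv.conf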
	
	
	==> describe nodes <==
	Name:               addons-647801
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-647801
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9de8f0b190a4305b11b3a925ec3e499cf3fc021
	                    minikube.k8s.io/name=addons-647801
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T18_18_42_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-647801
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-647801"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 18:18:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-647801
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 18:21:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 18:21:15 +0000   Mon, 08 Apr 2024 18:18:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 18:21:15 +0000   Mon, 08 Apr 2024 18:18:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 18:21:15 +0000   Mon, 08 Apr 2024 18:18:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 18:21:15 +0000   Mon, 08 Apr 2024 18:18:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    addons-647801
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 80f4b37546644b369118ad6e9268b537
	  System UUID:                80f4b375-4664-4b36-9118-ad6e9268b537
	  Boot ID:                    a84b79f3-acae-4ce7-b370-ddc66d32115e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.14
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-r9njc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  gcp-auth                    gcp-auth-7d69788767-szpkq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  headlamp                    headlamp-5b77dbd7c4-mpdm9                0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 coredns-76f75df574-dptwv                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m22s
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 csi-hostpathplugin-wd6qb                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 etcd-addons-647801                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m35s
	  kube-system                 kube-apiserver-addons-647801             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-controller-manager-addons-647801    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-proxy-66qs8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-addons-647801             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-6bth7           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     2m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   0 (0%)
	  memory             298Mi (7%)   426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m20s                  kube-proxy       
	  Normal  Starting                 2m43s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m42s (x8 over 2m43s)  kubelet          Node addons-647801 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m42s (x8 over 2m43s)  kubelet          Node addons-647801 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m42s (x7 over 2m43s)  kubelet          Node addons-647801 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m35s                  kubelet          Node addons-647801 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m35s                  kubelet          Node addons-647801 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m35s                  kubelet          Node addons-647801 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m35s                  kubelet          Node addons-647801 status is now: NodeReady
	  Normal  RegisteredNode           2m23s                  node-controller  Node addons-647801 event: Registered Node addons-647801 in Controller
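Nothing in this node view points at the CSI failure: the node is Ready, untainted, and only lightly committed, with CPU requests of 750m against the 2-CPU capacity accounting for the 37% shown under Allocated resources. The same summary can be pulled without the full dump (illustrative):

    kubectl --context addons-647801 describe node addons-647801 | sed -n '/Allocated resources:/,/Events:/p'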
	
	
	==> dmesg <==
	[  +5.003451] systemd-fstab-generator[865]: Ignoring "noauto" option for root device
	[  +0.063339] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.727550] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	[  +0.077301] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.896542] systemd-fstab-generator[1439]: Ignoring "noauto" option for root device
	[  +0.126171] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 8 18:19] kauditd_printk_skb: 111 callbacks suppressed
	[  +5.057670] kauditd_printk_skb: 83 callbacks suppressed
	[  +8.087617] kauditd_printk_skb: 141 callbacks suppressed
	[ +10.962155] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.141574] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.454427] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.385524] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.714838] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.390034] kauditd_printk_skb: 81 callbacks suppressed
	[Apr 8 18:20] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.495582] kauditd_printk_skb: 22 callbacks suppressed
	[  +9.506723] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.056160] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.564246] kauditd_printk_skb: 95 callbacks suppressed
	[  +6.189786] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.334409] kauditd_printk_skb: 48 callbacks suppressed
	[  +6.838563] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.350417] kauditd_printk_skb: 16 callbacks suppressed
	[Apr 8 18:21] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [affe99c11d9b3ad743bd6071f7fa32625961284e681e5157a2c17556d87bc0d9] <==
	{"level":"warn","ts":"2024-04-08T18:19:46.105681Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.938164ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14077"}
	{"level":"info","ts":"2024-04-08T18:19:46.10582Z","caller":"traceutil/trace.go:171","msg":"trace[1599477749] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1018; }","duration":"213.04493ms","start":"2024-04-08T18:19:45.892697Z","end":"2024-04-08T18:19:46.105742Z","steps":["trace[1599477749] 'agreement among raft nodes before linearized reading'  (duration: 212.814856ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T18:19:46.107249Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.340097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85584"}
	{"level":"info","ts":"2024-04-08T18:19:46.107511Z","caller":"traceutil/trace.go:171","msg":"trace[995638821] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1018; }","duration":"209.63129ms","start":"2024-04-08T18:19:45.897871Z","end":"2024-04-08T18:19:46.107503Z","steps":["trace[995638821] 'agreement among raft nodes before linearized reading'  (duration: 208.295319ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T18:19:46.108083Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.84506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11167"}
	{"level":"info","ts":"2024-04-08T18:19:46.108138Z","caller":"traceutil/trace.go:171","msg":"trace[154292553] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1018; }","duration":"163.919333ms","start":"2024-04-08T18:19:45.944212Z","end":"2024-04-08T18:19:46.108132Z","steps":["trace[154292553] 'agreement among raft nodes before linearized reading'  (duration: 163.811777ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T18:19:50.005714Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.402528ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14077"}
	{"level":"info","ts":"2024-04-08T18:19:50.005797Z","caller":"traceutil/trace.go:171","msg":"trace[1765295127] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1044; }","duration":"113.491909ms","start":"2024-04-08T18:19:49.892288Z","end":"2024-04-08T18:19:50.00578Z","steps":["trace[1765295127] 'range keys from in-memory index tree'  (duration: 113.229063ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T18:19:50.006206Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.567812ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/ingress-nginx/ingress-nginx-admission\" ","response":"range_response_count:1 size:1823"}
	{"level":"info","ts":"2024-04-08T18:19:50.006402Z","caller":"traceutil/trace.go:171","msg":"trace[485679215] range","detail":"{range_begin:/registry/secrets/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:1044; }","duration":"229.768349ms","start":"2024-04-08T18:19:49.776624Z","end":"2024-04-08T18:19:50.006393Z","steps":["trace[485679215] 'range keys from in-memory index tree'  (duration: 229.246571ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T18:19:52.345441Z","caller":"traceutil/trace.go:171","msg":"trace[1146554726] transaction","detail":"{read_only:false; response_revision:1059; number_of_response:1; }","duration":"106.895102ms","start":"2024-04-08T18:19:52.238525Z","end":"2024-04-08T18:19:52.34542Z","steps":["trace[1146554726] 'process raft request'  (duration: 106.731429ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T18:20:04.582546Z","caller":"traceutil/trace.go:171","msg":"trace[965512852] linearizableReadLoop","detail":"{readStateIndex:1189; appliedIndex:1188; }","duration":"139.524572ms","start":"2024-04-08T18:20:04.443007Z","end":"2024-04-08T18:20:04.582532Z","steps":["trace[965512852] 'read index received'  (duration: 139.340424ms)","trace[965512852] 'applied index is now lower than readState.Index'  (duration: 183.719µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-08T18:20:04.583034Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.011703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11487"}
	{"level":"info","ts":"2024-04-08T18:20:04.583144Z","caller":"traceutil/trace.go:171","msg":"trace[233081475] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1156; }","duration":"140.154687ms","start":"2024-04-08T18:20:04.442981Z","end":"2024-04-08T18:20:04.583136Z","steps":["trace[233081475] 'agreement among raft nodes before linearized reading'  (duration: 139.95104ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T18:20:04.583392Z","caller":"traceutil/trace.go:171","msg":"trace[1940855536] transaction","detail":"{read_only:false; response_revision:1156; number_of_response:1; }","duration":"160.891979ms","start":"2024-04-08T18:20:04.422487Z","end":"2024-04-08T18:20:04.583379Z","steps":["trace[1940855536] 'process raft request'  (duration: 159.961848ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T18:20:13.892037Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":5115964652420396126,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-04-08T18:20:13.895279Z","caller":"traceutil/trace.go:171","msg":"trace[1378859153] transaction","detail":"{read_only:false; response_revision:1171; number_of_response:1; }","duration":"531.483357ms","start":"2024-04-08T18:20:13.363679Z","end":"2024-04-08T18:20:13.895163Z","steps":["trace[1378859153] 'process raft request'  (duration: 530.295553ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T18:20:13.895772Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T18:20:13.363665Z","time spent":"531.87041ms","remote":"127.0.0.1:38832","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":793,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-4h4hj.17c460f130a40cba\" mod_revision:975 > success:<request_put:<key:\"/registry/events/gadget/gadget-4h4hj.17c460f130a40cba\" value_size:722 lease:5115964652420395568 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-4h4hj.17c460f130a40cba\" > >"}
	{"level":"info","ts":"2024-04-08T18:20:13.900714Z","caller":"traceutil/trace.go:171","msg":"trace[199806907] linearizableReadLoop","detail":"{readStateIndex:1206; appliedIndex:1206; }","duration":"508.83549ms","start":"2024-04-08T18:20:13.391867Z","end":"2024-04-08T18:20:13.900703Z","steps":["trace[199806907] 'read index received'  (duration: 508.830721ms)","trace[199806907] 'applied index is now lower than readState.Index'  (duration: 3.846µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-08T18:20:13.900887Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"456.039196ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11487"}
	{"level":"info","ts":"2024-04-08T18:20:13.900935Z","caller":"traceutil/trace.go:171","msg":"trace[302238156] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1171; }","duration":"456.120044ms","start":"2024-04-08T18:20:13.444808Z","end":"2024-04-08T18:20:13.900929Z","steps":["trace[302238156] 'agreement among raft nodes before linearized reading'  (duration: 455.980555ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T18:20:13.900959Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T18:20:13.444795Z","time spent":"456.158229ms","remote":"127.0.0.1:38924","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11510,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-04-08T18:20:13.901195Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"509.318408ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14397"}
	{"level":"info","ts":"2024-04-08T18:20:13.902187Z","caller":"traceutil/trace.go:171","msg":"trace[1527353520] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1171; }","duration":"510.331975ms","start":"2024-04-08T18:20:13.391843Z","end":"2024-04-08T18:20:13.902175Z","steps":["trace[1527353520] 'agreement among raft nodes before linearized reading'  (duration: 509.04035ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T18:20:13.90237Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T18:20:13.391829Z","time spent":"510.47253ms","remote":"127.0.0.1:38924","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14420,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	
	
	==> gcp-auth [7d79f5277464b3c0b653d0628fe93f0235121a5c7c2a5a54a73c77bf46c17185] <==
	2024/04/08 18:20:24 Ready to write response ...
	2024/04/08 18:20:28 Ready to marshal response ...
	2024/04/08 18:20:28 Ready to write response ...
	2024/04/08 18:20:29 Ready to marshal response ...
	2024/04/08 18:20:29 Ready to write response ...
	2024/04/08 18:20:32 Ready to marshal response ...
	2024/04/08 18:20:32 Ready to write response ...
	2024/04/08 18:20:34 Ready to marshal response ...
	2024/04/08 18:20:34 Ready to write response ...
	2024/04/08 18:20:34 Ready to marshal response ...
	2024/04/08 18:20:34 Ready to write response ...
	2024/04/08 18:20:37 Ready to marshal response ...
	2024/04/08 18:20:37 Ready to write response ...
	2024/04/08 18:20:43 Ready to marshal response ...
	2024/04/08 18:20:43 Ready to write response ...
	2024/04/08 18:20:44 Ready to marshal response ...
	2024/04/08 18:20:44 Ready to write response ...
	2024/04/08 18:20:54 Ready to marshal response ...
	2024/04/08 18:20:54 Ready to write response ...
	2024/04/08 18:20:54 Ready to marshal response ...
	2024/04/08 18:20:54 Ready to write response ...
	2024/04/08 18:20:54 Ready to marshal response ...
	2024/04/08 18:20:54 Ready to write response ...
	2024/04/08 18:21:06 Ready to marshal response ...
	2024/04/08 18:21:06 Ready to write response ...
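Each marshal/write pair above is the gcp-auth mutating admission webhook handling one pod-creation request; the timestamps line up with the pod churn visible in the other components' logs. The registered webhook can be listed directly, without assuming its name (illustrative):

    kubectl --context addons-647801 get mutatingwebhookconfigurations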
	
	
	==> kernel <==
	 18:21:18 up 3 min,  0 users,  load average: 1.56, 1.20, 0.51
	Linux addons-647801 5.10.207 #1 SMP Mon Apr 8 14:58:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ea9620ff06d1d05c48aae97cd2060563062d38a48fc5303805da97b280102963] <==
	Trace[991614015]: ["List(recursive=true) etcd3" audit-id:35c153a0-74c2-4f50-9e19-bda44dab9478,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 513ms (18:20:13.391)]
	Trace[991614015]: [513.418899ms] [513.418899ms] END
	E0408 18:20:27.400492       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.113:8443->10.244.0.23:44006: read: connection reset by peer
	I0408 18:20:32.634865       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0408 18:20:32.866641       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.156.102"}
	I0408 18:20:37.363182       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0408 18:20:38.510954       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0408 18:20:43.493693       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.101.237"}
	I0408 18:20:44.957366       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0408 18:20:45.964663       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0408 18:20:54.715059       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.157.240"}
	E0408 18:21:00.750159       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0408 18:21:16.162048       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 18:21:16.163157       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 18:21:16.189144       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 18:21:16.189998       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 18:21:16.203133       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 18:21:16.203687       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 18:21:16.235419       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 18:21:16.235654       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 18:21:16.322256       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 18:21:16.322326       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0408 18:21:17.204646       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0408 18:21:17.322641       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0408 18:21:17.365914       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
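The burst of "Adding GroupVersion snapshot.storage.k8s.io ..." lines followed by "Terminating all watchers" at 18:21:17 is the volume-snapshot CRD set being torn down while the CSI test's post-mortem was still running; once the CRDs are gone, every client still watching those resources is disconnected. Whether the CRDs are present at a given moment can be checked with (illustrative; after this teardown it should print nothing):

    kubectl --context addons-647801 get crd -o name | grep snapshot.storage.k8s.io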
	
	
	==> kube-controller-manager [e0890d0ccaa9f2587853edc7f1efc2c3ef53f278595fa66adcae354a446c0e6f] <==
	I0408 18:20:54.776646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="28.156367ms"
	E0408 18:20:54.776968       1 replica_set.go:557] sync "headlamp/headlamp-5b77dbd7c4" failed with pods "headlamp-5b77dbd7c4-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0408 18:20:54.801348       1 event.go:376] "Event occurred" object="headlamp/headlamp-5b77dbd7c4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-5b77dbd7c4-mpdm9"
	I0408 18:20:54.830101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="52.950758ms"
	I0408 18:20:54.857542       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0408 18:20:54.857660       1 shared_informer.go:318] Caches are synced for resource quota
	I0408 18:20:54.866162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="35.731822ms"
	I0408 18:20:54.866295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="61.691µs"
	I0408 18:20:55.299225       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0408 18:20:55.299709       1 shared_informer.go:318] Caches are synced for garbage collector
	I0408 18:20:56.722263       1 namespace_controller.go:182] "Namespace has been deleted" namespace="ingress-nginx"
	W0408 18:20:59.322925       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 18:20:59.322967       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0408 18:20:59.883056       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="62.341µs"
	I0408 18:20:59.913951       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="11.192701ms"
	I0408 18:20:59.914259       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5b77dbd7c4" duration="45.996µs"
	I0408 18:21:05.906893       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0408 18:21:13.633860       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 18:21:13.633968       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0408 18:21:16.409463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="4.458µs"
	E0408 18:21:17.207184       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 18:21:17.325253       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 18:21:17.367893       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0408 18:21:18.011831       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 18:21:18.011869       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
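Two threads stand out here: the repeated PartialObjectMetadata list/watch failures are the metadata informers re-listing the snapshot CRDs that the apiserver log above shows being deleted, and the 18:21:05 ExternalProvisioning event shows hpvc-restore still waiting on hostpath.csi.k8s.io, the same provisioner the failing test exercises. The claim's provisioning history is visible directly (illustrative):

    kubectl --context addons-647801 describe pvc hpvc-restore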
	
	
	==> kube-proxy [c08238c3eed00412eff1d4b053a70ec85947597e1b2d10546f99da7c84f96ec1] <==
	I0408 18:18:56.895705       1 server_others.go:72] "Using iptables proxy"
	I0408 18:18:57.170175       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.113"]
	I0408 18:18:57.372941       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0408 18:18:57.372962       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 18:18:57.372974       1 server_others.go:168] "Using iptables Proxier"
	I0408 18:18:57.383135       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 18:18:57.383311       1 server.go:865] "Version info" version="v1.29.3"
	I0408 18:18:57.383330       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 18:18:57.384219       1 config.go:188] "Starting service config controller"
	I0408 18:18:57.384232       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 18:18:57.384250       1 config.go:97] "Starting endpoint slice config controller"
	I0408 18:18:57.384253       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 18:18:57.387884       1 config.go:315] "Starting node config controller"
	I0408 18:18:57.387894       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 18:18:57.485439       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0408 18:18:57.485498       1 shared_informer.go:318] Caches are synced for service config
	I0408 18:18:57.491723       1 shared_informer.go:318] Caches are synced for node config
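kube-proxy came up cleanly in iptables mode and, per its startup message, set route_localnet=1 so NodePorts answer on localhost. That sysctl can be verified on the node without assuming any extra tooling (illustrative):

    out/minikube-linux-amd64 ssh -p addons-647801 -- cat /proc/sys/net/ipv4/conf/all/route_localnet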
	
	
	==> kube-scheduler [854157b3fcf90489a3f132ca17bf56449cfc83c0415c4463bc239bfb814ff6d0] <==
	W0408 18:18:38.793086       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 18:18:38.793197       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0408 18:18:38.793130       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 18:18:38.793901       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0408 18:18:38.793348       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 18:18:38.794065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 18:18:39.639501       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 18:18:39.639612       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0408 18:18:39.661235       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 18:18:39.661291       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0408 18:18:39.681971       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 18:18:39.682035       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 18:18:39.775369       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 18:18:39.775744       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0408 18:18:39.775380       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 18:18:39.776174       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 18:18:39.854718       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 18:18:39.854827       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0408 18:18:39.870543       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 18:18:39.870907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0408 18:18:40.036849       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0408 18:18:40.037268       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0408 18:18:40.166640       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 18:18:40.166959       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0408 18:18:42.566327       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
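The forbidden list/watch errors are confined to the few seconds before 18:18:42, when the scheduler starts ahead of RBAC bootstrapping; the closing "Caches are synced" line shows it recovered on its own. The same permission can be probed after startup (illustrative):

    kubectl --context addons-647801 auth can-i list persistentvolumeclaims --as=system:kube-scheduler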
	
	
	==> kubelet <==
	Apr 08 18:21:15 addons-647801 kubelet[1246]: I0408 18:21:15.924668    1246 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9362461-f751-4f4f-a091-a333db2c33a6-config-volume\") pod \"d9362461-f751-4f4f-a091-a333db2c33a6\" (UID: \"d9362461-f751-4f4f-a091-a333db2c33a6\") "
	Apr 08 18:21:15 addons-647801 kubelet[1246]: I0408 18:21:15.925429    1246 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9362461-f751-4f4f-a091-a333db2c33a6-config-volume" (OuterVolumeSpecName: "config-volume") pod "d9362461-f751-4f4f-a091-a333db2c33a6" (UID: "d9362461-f751-4f4f-a091-a333db2c33a6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Apr 08 18:21:15 addons-647801 kubelet[1246]: I0408 18:21:15.934381    1246 scope.go:117] "RemoveContainer" containerID="b6e9a9a2a743dec61a1d98849f18c4f5a0e7ff82be76ba7a4c27134f53377ea2"
	Apr 08 18:21:15 addons-647801 kubelet[1246]: I0408 18:21:15.939684    1246 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9362461-f751-4f4f-a091-a333db2c33a6-kube-api-access-l256g" (OuterVolumeSpecName: "kube-api-access-l256g") pod "d9362461-f751-4f4f-a091-a333db2c33a6" (UID: "d9362461-f751-4f4f-a091-a333db2c33a6"). InnerVolumeSpecName "kube-api-access-l256g". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 08 18:21:15 addons-647801 kubelet[1246]: I0408 18:21:15.959053    1246 scope.go:117] "RemoveContainer" containerID="b6e9a9a2a743dec61a1d98849f18c4f5a0e7ff82be76ba7a4c27134f53377ea2"
	Apr 08 18:21:15 addons-647801 kubelet[1246]: E0408 18:21:15.962372    1246 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6e9a9a2a743dec61a1d98849f18c4f5a0e7ff82be76ba7a4c27134f53377ea2\": not found" containerID="b6e9a9a2a743dec61a1d98849f18c4f5a0e7ff82be76ba7a4c27134f53377ea2"
	Apr 08 18:21:15 addons-647801 kubelet[1246]: I0408 18:21:15.962444    1246 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6e9a9a2a743dec61a1d98849f18c4f5a0e7ff82be76ba7a4c27134f53377ea2"} err="failed to get container status \"b6e9a9a2a743dec61a1d98849f18c4f5a0e7ff82be76ba7a4c27134f53377ea2\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6e9a9a2a743dec61a1d98849f18c4f5a0e7ff82be76ba7a4c27134f53377ea2\": not found"
	Apr 08 18:21:16 addons-647801 kubelet[1246]: I0408 18:21:16.025496    1246 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l256g\" (UniqueName: \"kubernetes.io/projected/d9362461-f751-4f4f-a091-a333db2c33a6-kube-api-access-l256g\") on node \"addons-647801\" DevicePath \"\""
	Apr 08 18:21:16 addons-647801 kubelet[1246]: I0408 18:21:16.025626    1246 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9362461-f751-4f4f-a091-a333db2c33a6-config-volume\") on node \"addons-647801\" DevicePath \"\""
	Apr 08 18:21:16 addons-647801 kubelet[1246]: I0408 18:21:16.357656    1246 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04d03b2c-7e75-4fea-9eda-90b73f4dd813" path="/var/lib/kubelet/pods/04d03b2c-7e75-4fea-9eda-90b73f4dd813/volumes"
	Apr 08 18:21:16 addons-647801 kubelet[1246]: I0408 18:21:16.358220    1246 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9362461-f751-4f4f-a091-a333db2c33a6" path="/var/lib/kubelet/pods/d9362461-f751-4f4f-a091-a333db2c33a6/volumes"
	Apr 08 18:21:16 addons-647801 kubelet[1246]: I0408 18:21:16.933027    1246 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqc57\" (UniqueName: \"kubernetes.io/projected/62df7a7a-1c16-42b0-8160-58eaa5a8d326-kube-api-access-qqc57\") pod \"62df7a7a-1c16-42b0-8160-58eaa5a8d326\" (UID: \"62df7a7a-1c16-42b0-8160-58eaa5a8d326\") "
	Apr 08 18:21:16 addons-647801 kubelet[1246]: I0408 18:21:16.933130    1246 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m9xf\" (UniqueName: \"kubernetes.io/projected/bda4542f-11c7-409e-8e73-e914ac8d731a-kube-api-access-2m9xf\") pod \"bda4542f-11c7-409e-8e73-e914ac8d731a\" (UID: \"bda4542f-11c7-409e-8e73-e914ac8d731a\") "
	Apr 08 18:21:16 addons-647801 kubelet[1246]: I0408 18:21:16.939841    1246 scope.go:117] "RemoveContainer" containerID="aac47f897e228db0da34b80f0c53a626665f15859ef90b8e6e31c17544dd97a8"
	Apr 08 18:21:16 addons-647801 kubelet[1246]: I0408 18:21:16.943920    1246 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62df7a7a-1c16-42b0-8160-58eaa5a8d326-kube-api-access-qqc57" (OuterVolumeSpecName: "kube-api-access-qqc57") pod "62df7a7a-1c16-42b0-8160-58eaa5a8d326" (UID: "62df7a7a-1c16-42b0-8160-58eaa5a8d326"). InnerVolumeSpecName "kube-api-access-qqc57". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 08 18:21:16 addons-647801 kubelet[1246]: I0408 18:21:16.952199    1246 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda4542f-11c7-409e-8e73-e914ac8d731a-kube-api-access-2m9xf" (OuterVolumeSpecName: "kube-api-access-2m9xf") pod "bda4542f-11c7-409e-8e73-e914ac8d731a" (UID: "bda4542f-11c7-409e-8e73-e914ac8d731a"). InnerVolumeSpecName "kube-api-access-2m9xf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 08 18:21:16 addons-647801 kubelet[1246]: I0408 18:21:16.966251    1246 scope.go:117] "RemoveContainer" containerID="aac47f897e228db0da34b80f0c53a626665f15859ef90b8e6e31c17544dd97a8"
	Apr 08 18:21:16 addons-647801 kubelet[1246]: E0408 18:21:16.969172    1246 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aac47f897e228db0da34b80f0c53a626665f15859ef90b8e6e31c17544dd97a8\": not found" containerID="aac47f897e228db0da34b80f0c53a626665f15859ef90b8e6e31c17544dd97a8"
	Apr 08 18:21:16 addons-647801 kubelet[1246]: I0408 18:21:16.969423    1246 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aac47f897e228db0da34b80f0c53a626665f15859ef90b8e6e31c17544dd97a8"} err="failed to get container status \"aac47f897e228db0da34b80f0c53a626665f15859ef90b8e6e31c17544dd97a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"aac47f897e228db0da34b80f0c53a626665f15859ef90b8e6e31c17544dd97a8\": not found"
	Apr 08 18:21:16 addons-647801 kubelet[1246]: I0408 18:21:16.969515    1246 scope.go:117] "RemoveContainer" containerID="65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269"
	Apr 08 18:21:16 addons-647801 kubelet[1246]: I0408 18:21:16.984057    1246 scope.go:117] "RemoveContainer" containerID="65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269"
	Apr 08 18:21:16 addons-647801 kubelet[1246]: E0408 18:21:16.984992    1246 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269\": not found" containerID="65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269"
	Apr 08 18:21:16 addons-647801 kubelet[1246]: I0408 18:21:16.985117    1246 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269"} err="failed to get container status \"65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269\": rpc error: code = NotFound desc = an error occurred when try to find container \"65d9d55c75aef7e7ce6cd33287b8945d755ca57decdb2fb8b63bc25590d00269\": not found"
	Apr 08 18:21:17 addons-647801 kubelet[1246]: I0408 18:21:17.034489    1246 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qqc57\" (UniqueName: \"kubernetes.io/projected/62df7a7a-1c16-42b0-8160-58eaa5a8d326-kube-api-access-qqc57\") on node \"addons-647801\" DevicePath \"\""
	Apr 08 18:21:17 addons-647801 kubelet[1246]: I0408 18:21:17.034827    1246 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2m9xf\" (UniqueName: \"kubernetes.io/projected/bda4542f-11c7-409e-8e73-e914ac8d731a-kube-api-access-2m9xf\") on node \"addons-647801\" DevicePath \"\""
	
	
	==> storage-provisioner [5a9344b76c925054a5240512a4b175974bcc65f09a2f7be080b62dd9e8fb9add] <==
	I0408 18:19:07.286883       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 18:19:07.670361       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 18:19:07.670426       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 18:19:07.716269       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 18:19:07.718033       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-647801_4afe64e9-9a23-47d1-bb69-75d44cd1a8de!
	I0408 18:19:07.719063       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5fd2bf39-78de-4e7e-be4b-3754dac28d3d", APIVersion:"v1", ResourceVersion:"829", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-647801_4afe64e9-9a23-47d1-bb69-75d44cd1a8de became leader
	I0408 18:19:07.819069       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-647801_4afe64e9-9a23-47d1-bb69-75d44cd1a8de!
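This is minikube's hostpath provisioner backing the standard StorageClass, separate from the csi-hostpath driver under test. It acquired its lease through the legacy Endpoints-based leader election, so the holder identity is recorded as an annotation on that Endpoints object (illustrative):

    kubectl --context addons-647801 -n kube-system get endpoints k8s.io-minikube-hostpath -o jsonpath='{.metadata.annotations}'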
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-647801 -n addons-647801
helpers_test.go:261: (dbg) Run:  kubectl --context addons-647801 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (60.34s)
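To reproduce just this failure outside CI, the subtest can be selected by name from a minikube checkout; this is a minimal sketch, and the real harness passes additional -test flags and minikube start arguments:

    go test ./test/integration -run 'TestAddons/parallel/CSI' -timeout 60m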


Test pass (293/333)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.28
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.29.3/json-events 4.16
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.15
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.30.0-rc.1/json-events 4.16
22 TestDownloadOnly/v1.30.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.30.0-rc.1/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-rc.1/DeleteAll 0.15
28 TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.58
31 TestOffline 64.79
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 142.13
38 TestAddons/parallel/Registry 13.82
39 TestAddons/parallel/Ingress 21.42
40 TestAddons/parallel/InspektorGadget 11.94
41 TestAddons/parallel/MetricsServer 6.23
42 TestAddons/parallel/HelmTiller 15.24
45 TestAddons/parallel/Headlamp 13.05
46 TestAddons/parallel/CloudSpanner 5.68
47 TestAddons/parallel/LocalPath 54.23
48 TestAddons/parallel/NvidiaDevicePlugin 5.67
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
53 TestAddons/StoppedEnableDisable 92.77
54 TestCertOptions 59.22
55 TestCertExpiration 314.3
57 TestForceSystemdFlag 86.06
58 TestForceSystemdEnv 73.83
60 TestKVMDriverInstallOrUpdate 3.11
64 TestErrorSpam/setup 46.72
65 TestErrorSpam/start 0.4
66 TestErrorSpam/status 0.8
67 TestErrorSpam/pause 1.68
68 TestErrorSpam/unpause 1.75
69 TestErrorSpam/stop 4.93
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 67.22
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 21.41
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.66
81 TestFunctional/serial/CacheCmd/cache/add_local 1.67
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.9
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 43.72
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.55
92 TestFunctional/serial/LogsFileCmd 1.57
93 TestFunctional/serial/InvalidService 5.22
95 TestFunctional/parallel/ConfigCmd 0.4
96 TestFunctional/parallel/DashboardCmd 13.81
97 TestFunctional/parallel/DryRun 0.32
98 TestFunctional/parallel/InternationalLanguage 0.18
99 TestFunctional/parallel/StatusCmd 0.89
103 TestFunctional/parallel/ServiceCmdConnect 10.51
104 TestFunctional/parallel/AddonsCmd 0.17
105 TestFunctional/parallel/PersistentVolumeClaim 42.56
107 TestFunctional/parallel/SSHCmd 0.47
108 TestFunctional/parallel/CpCmd 1.38
109 TestFunctional/parallel/MySQL 29.85
110 TestFunctional/parallel/FileSync 0.23
111 TestFunctional/parallel/CertSync 1.37
115 TestFunctional/parallel/NodeLabels 0.09
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
119 TestFunctional/parallel/License 0.16
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/Version/components 0.76
125 TestFunctional/parallel/MountCmd/any-port 19.88
126 TestFunctional/parallel/ServiceCmd/DeployApp 7.19
127 TestFunctional/parallel/ServiceCmd/List 0.47
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
130 TestFunctional/parallel/ServiceCmd/Format 0.33
131 TestFunctional/parallel/ServiceCmd/URL 0.35
132 TestFunctional/parallel/MountCmd/specific-port 1.98
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
134 TestFunctional/parallel/ProfileCmd/profile_list 0.31
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
136 TestFunctional/parallel/MountCmd/VerifyCleanup 1.58
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.86
142 TestFunctional/parallel/ImageCommands/Setup 0.95
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.43
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.32
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.33
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.11
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.57
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.16
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.02
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestMultiControlPlane/serial/StartCluster 307.42
166 TestMultiControlPlane/serial/DeployApp 4.84
167 TestMultiControlPlane/serial/PingHostFromPods 1.44
168 TestMultiControlPlane/serial/AddWorkerNode 49.03
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
171 TestMultiControlPlane/serial/CopyFile 13.9
172 TestMultiControlPlane/serial/StopSecondaryNode 93.16
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.42
174 TestMultiControlPlane/serial/RestartSecondaryNode 45.28
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.58
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 458.67
177 TestMultiControlPlane/serial/DeleteSecondaryNode 8.2
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.39
179 TestMultiControlPlane/serial/StopCluster 275.81
180 TestMultiControlPlane/serial/RestartCluster 155.89
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
182 TestMultiControlPlane/serial/AddSecondaryNode 75.56
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.56
187 TestJSONOutput/start/Command 61.48
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.76
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.68
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.37
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.22
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 95.44
219 TestMountStart/serial/StartWithMountFirst 28.22
220 TestMountStart/serial/VerifyMountFirst 0.4
221 TestMountStart/serial/StartWithMountSecond 29.92
222 TestMountStart/serial/VerifyMountSecond 0.4
223 TestMountStart/serial/DeleteFirst 0.87
224 TestMountStart/serial/VerifyMountPostDelete 0.39
225 TestMountStart/serial/Stop 1.41
226 TestMountStart/serial/RestartStopped 22.69
227 TestMountStart/serial/VerifyMountPostStop 0.41
230 TestMultiNode/serial/FreshStart2Nodes 104.33
231 TestMultiNode/serial/DeployApp2Nodes 4.56
232 TestMultiNode/serial/PingHostFrom2Pods 0.9
233 TestMultiNode/serial/AddNode 41.55
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.23
236 TestMultiNode/serial/CopyFile 7.7
237 TestMultiNode/serial/StopNode 2.43
238 TestMultiNode/serial/StartAfterStop 26.89
239 TestMultiNode/serial/RestartKeepsNodes 296.07
240 TestMultiNode/serial/DeleteNode 2.24
241 TestMultiNode/serial/StopMultiNode 184.22
242 TestMultiNode/serial/RestartMultiNode 81
243 TestMultiNode/serial/ValidateNameConflict 47.42
248 TestPreload 229.98
250 TestScheduledStopUnix 118.99
254 TestRunningBinaryUpgrade 202.84
256 TestKubernetesUpgrade 179.18
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
263 TestNoKubernetes/serial/StartWithK8s 97.42
268 TestNetworkPlugins/group/false 3.48
272 TestNoKubernetes/serial/StartWithStopK8s 75.33
273 TestNoKubernetes/serial/Start 36.97
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
275 TestNoKubernetes/serial/ProfileList 1.61
276 TestNoKubernetes/serial/Stop 1.63
277 TestNoKubernetes/serial/StartNoArgs 23.79
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
279 TestStoppedBinaryUpgrade/Setup 0.57
280 TestStoppedBinaryUpgrade/Upgrade 144.86
289 TestPause/serial/Start 82.57
290 TestNetworkPlugins/group/auto/Start 106.24
291 TestNetworkPlugins/group/kindnet/Start 127.86
292 TestStoppedBinaryUpgrade/MinikubeLogs 1.08
293 TestNetworkPlugins/group/calico/Start 130.47
294 TestPause/serial/SecondStartNoReconfiguration 85.88
295 TestNetworkPlugins/group/auto/KubeletFlags 0.26
296 TestNetworkPlugins/group/auto/NetCatPod 11.3
297 TestNetworkPlugins/group/auto/DNS 0.19
298 TestNetworkPlugins/group/auto/Localhost 0.15
299 TestNetworkPlugins/group/auto/HairPin 0.16
300 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
301 TestNetworkPlugins/group/custom-flannel/Start 96.63
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
303 TestNetworkPlugins/group/kindnet/NetCatPod 10.3
304 TestNetworkPlugins/group/kindnet/DNS 0.22
305 TestNetworkPlugins/group/kindnet/Localhost 0.16
306 TestNetworkPlugins/group/kindnet/HairPin 0.17
307 TestPause/serial/Pause 0.93
308 TestPause/serial/VerifyStatus 0.82
309 TestNetworkPlugins/group/enable-default-cni/Start 70.66
310 TestPause/serial/Unpause 0.91
311 TestPause/serial/PauseAgain 1.16
312 TestPause/serial/DeletePaused 1.1
313 TestPause/serial/VerifyDeletedResources 0.34
314 TestNetworkPlugins/group/flannel/Start 116.29
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.23
317 TestNetworkPlugins/group/calico/NetCatPod 11.26
318 TestNetworkPlugins/group/calico/DNS 0.23
319 TestNetworkPlugins/group/calico/Localhost 0.21
320 TestNetworkPlugins/group/calico/HairPin 1.58
321 TestNetworkPlugins/group/bridge/Start 77.86
322 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
323 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.3
324 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
325 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.34
326 TestNetworkPlugins/group/custom-flannel/DNS 0.18
327 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
328 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
329 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
330 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
331 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
333 TestStartStop/group/old-k8s-version/serial/FirstStart 183.79
335 TestStartStop/group/no-preload/serial/FirstStart 132.58
336 TestNetworkPlugins/group/flannel/ControllerPod 6.01
337 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
338 TestNetworkPlugins/group/flannel/NetCatPod 11.28
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
340 TestNetworkPlugins/group/bridge/NetCatPod 11.8
341 TestNetworkPlugins/group/flannel/DNS 0.27
342 TestNetworkPlugins/group/flannel/Localhost 0.2
343 TestNetworkPlugins/group/flannel/HairPin 0.19
344 TestNetworkPlugins/group/bridge/DNS 0.19
345 TestNetworkPlugins/group/bridge/Localhost 0.17
346 TestNetworkPlugins/group/bridge/HairPin 0.2
348 TestStartStop/group/embed-certs/serial/FirstStart 76.49
350 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 89.82
351 TestStartStop/group/embed-certs/serial/DeployApp 8.37
352 TestStartStop/group/no-preload/serial/DeployApp 9.39
353 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.27
354 TestStartStop/group/embed-certs/serial/Stop 92.59
355 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.34
356 TestStartStop/group/no-preload/serial/Stop 92.56
357 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
359 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.53
360 TestStartStop/group/old-k8s-version/serial/DeployApp 8.45
361 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.08
362 TestStartStop/group/old-k8s-version/serial/Stop 92.49
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
364 TestStartStop/group/embed-certs/serial/SecondStart 324.58
365 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
366 TestStartStop/group/no-preload/serial/SecondStart 338.23
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
368 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 341.86
369 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.31
370 TestStartStop/group/old-k8s-version/serial/SecondStart 207.26
371 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
372 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
373 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
374 TestStartStop/group/old-k8s-version/serial/Pause 3
376 TestStartStop/group/newest-cni/serial/FirstStart 65.94
377 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.01
378 TestStartStop/group/newest-cni/serial/DeployApp 0
379 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.42
380 TestStartStop/group/newest-cni/serial/Stop 7.43
381 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
382 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
383 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.37
384 TestStartStop/group/newest-cni/serial/SecondStart 43.1
385 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
386 TestStartStop/group/embed-certs/serial/Pause 3.74
387 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
388 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
389 TestStartStop/group/no-preload/serial/Pause 3.45
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.01
391 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.14
394 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
395 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
396 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
397 TestStartStop/group/newest-cni/serial/Pause 2.68

TestDownloadOnly/v1.20.0/json-events (7.28s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-801401 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-801401 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (7.281541043s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.28s)
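For reference, the download-only invocation exercised above can be rerun by hand. The profile name below is simply the one this run generated (any unused name works), and the harness happens to pass --container-runtime twice, though once is enough:

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-801401 \
      --force --alsologtostderr --kubernetes-version=v1.20.0 \
      --driver=kvm2 --container-runtime=containerd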

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-801401
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-801401: exit status 85 (76.845837ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-801401 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC |          |
	|         | -p download-only-801401        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 18:17:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 18:17:38.690409  618248 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:17:38.690543  618248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:17:38.690552  618248 out.go:304] Setting ErrFile to fd 2...
	I0408 18:17:38.690557  618248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:17:38.690731  618248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
	W0408 18:17:38.690868  618248 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18585-610499/.minikube/config/config.json: open /home/jenkins/minikube-integration/18585-610499/.minikube/config/config.json: no such file or directory
	I0408 18:17:38.691434  618248 out.go:298] Setting JSON to true
	I0408 18:17:38.692408  618248 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7210,"bootTime":1712593049,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 18:17:38.692529  618248 start.go:139] virtualization: kvm guest
	I0408 18:17:38.695237  618248 out.go:97] [download-only-801401] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 18:17:38.697006  618248 out.go:169] MINIKUBE_LOCATION=18585
	I0408 18:17:38.695407  618248 notify.go:220] Checking for updates...
	W0408 18:17:38.695454  618248 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18585-610499/.minikube/cache/preloaded-tarball: no such file or directory
	I0408 18:17:38.699961  618248 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:17:38.701692  618248 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18585-610499/kubeconfig
	I0408 18:17:38.703271  618248 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-610499/.minikube
	I0408 18:17:38.704818  618248 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0408 18:17:38.707741  618248 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 18:17:38.708001  618248 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 18:17:38.743712  618248 out.go:97] Using the kvm2 driver based on user configuration
	I0408 18:17:38.743743  618248 start.go:297] selected driver: kvm2
	I0408 18:17:38.743752  618248 start.go:901] validating driver "kvm2" against <nil>
	I0408 18:17:38.744227  618248 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 18:17:38.744336  618248 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18585-610499/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 18:17:38.759785  618248 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 18:17:38.759841  618248 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 18:17:38.760351  618248 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0408 18:17:38.760503  618248 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 18:17:38.760561  618248 cni.go:84] Creating CNI manager for ""
	I0408 18:17:38.760574  618248 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0408 18:17:38.760584  618248 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 18:17:38.760636  618248 start.go:340] cluster config:
	{Name:download-only-801401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-801401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:17:38.760809  618248 iso.go:125] acquiring lock: {Name:mk6be88515b11e528d76386559642c5a6b85b7f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 18:17:38.763095  618248 out.go:97] Downloading VM boot image ...
	I0408 18:17:38.763156  618248 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18585-610499/.minikube/cache/iso/amd64/minikube-v1.33.0-1712570768-18585-amd64.iso
	I0408 18:17:41.412079  618248 out.go:97] Starting "download-only-801401" primary control-plane node in "download-only-801401" cluster
	I0408 18:17:41.412123  618248 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0408 18:17:41.443425  618248 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0408 18:17:41.443480  618248 cache.go:56] Caching tarball of preloaded images
	I0408 18:17:41.443694  618248 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0408 18:17:41.445690  618248 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0408 18:17:41.445720  618248 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0408 18:17:41.474243  618248 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/18585-610499/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-801401 host does not exist
	  To start a cluster, run: "minikube start -p download-only-801401"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
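The non-zero exit is the point of this check: a --download-only profile never creates a host, so, as the captured stdout says, there is nothing for "minikube logs" to read. A quick manual confirmation, assuming the profile from this run still exists:

    out/minikube-linux-amd64 logs -p download-only-801401; echo "exit: $?"   # 85 in this run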

TestDownloadOnly/v1.20.0/DeleteAll (0.15s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-801401
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.29.3/json-events (4.16s)
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-114584 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-114584 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (4.161332453s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (4.16s)

TestDownloadOnly/v1.29.3/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-114584
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-114584: exit status 85 (79.069249ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-801401 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC |                     |
	|         | -p download-only-801401        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC | 08 Apr 24 18:17 UTC |
	| delete  | -p download-only-801401        | download-only-801401 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC | 08 Apr 24 18:17 UTC |
	| start   | -o=json --download-only        | download-only-114584 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC |                     |
	|         | -p download-only-114584        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 18:17:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 18:17:46.331450  618432 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:17:46.331731  618432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:17:46.331742  618432 out.go:304] Setting ErrFile to fd 2...
	I0408 18:17:46.331746  618432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:17:46.331927  618432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
	I0408 18:17:46.332546  618432 out.go:298] Setting JSON to true
	I0408 18:17:46.333460  618432 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7217,"bootTime":1712593049,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 18:17:46.333527  618432 start.go:139] virtualization: kvm guest
	I0408 18:17:46.336077  618432 out.go:97] [download-only-114584] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 18:17:46.337911  618432 out.go:169] MINIKUBE_LOCATION=18585
	I0408 18:17:46.336298  618432 notify.go:220] Checking for updates...
	I0408 18:17:46.340978  618432 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:17:46.342399  618432 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18585-610499/kubeconfig
	I0408 18:17:46.343874  618432 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-610499/.minikube
	I0408 18:17:46.345307  618432 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-114584 host does not exist
	  To start a cluster, run: "minikube start -p download-only-114584"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

TestDownloadOnly/v1.29.3/DeleteAll (0.15s)
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.15s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-114584
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.0-rc.1/json-events (4.16s)
=== RUN   TestDownloadOnly/v1.30.0-rc.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-749213 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-749213 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (4.157961871s)
--- PASS: TestDownloadOnly/v1.30.0-rc.1/json-events (4.16s)

TestDownloadOnly/v1.30.0-rc.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.30.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-rc.1/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.30.0-rc.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-749213
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-749213: exit status 85 (79.541782ms)
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-801401 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC |                     |
	|         | -p download-only-801401           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC | 08 Apr 24 18:17 UTC |
	| delete  | -p download-only-801401           | download-only-801401 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC | 08 Apr 24 18:17 UTC |
	| start   | -o=json --download-only           | download-only-114584 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC |                     |
	|         | -p download-only-114584           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC | 08 Apr 24 18:17 UTC |
	| delete  | -p download-only-114584           | download-only-114584 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC | 08 Apr 24 18:17 UTC |
	| start   | -o=json --download-only           | download-only-749213 | jenkins | v1.33.0-beta.0 | 08 Apr 24 18:17 UTC |                     |
	|         | -p download-only-749213           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.1 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 18:17:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 18:17:50.852235  618576 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:17:50.852506  618576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:17:50.852518  618576 out.go:304] Setting ErrFile to fd 2...
	I0408 18:17:50.852525  618576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:17:50.852744  618576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
	I0408 18:17:50.853346  618576 out.go:298] Setting JSON to true
	I0408 18:17:50.854357  618576 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7222,"bootTime":1712593049,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 18:17:50.854431  618576 start.go:139] virtualization: kvm guest
	I0408 18:17:50.856847  618576 out.go:97] [download-only-749213] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 18:17:50.858628  618576 out.go:169] MINIKUBE_LOCATION=18585
	I0408 18:17:50.856995  618576 notify.go:220] Checking for updates...
	I0408 18:17:50.861888  618576 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:17:50.863353  618576 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18585-610499/kubeconfig
	I0408 18:17:50.864870  618576 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-610499/.minikube
	I0408 18:17:50.866326  618576 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-749213 host does not exist
	  To start a cluster, run: "minikube start -p download-only-749213"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.1/LogsDuration (0.08s)

TestDownloadOnly/v1.30.0-rc.1/DeleteAll (0.15s)
=== RUN   TestDownloadOnly/v1.30.0-rc.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.1/DeleteAll (0.15s)

TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-749213
--- PASS: TestDownloadOnly/v1.30.0-rc.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.58s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-805915 --alsologtostderr --binary-mirror http://127.0.0.1:45805 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-805915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-805915
--- PASS: TestBinaryMirror (0.58s)
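A sketch of the same mirror check; the 127.0.0.1:45805 address is the local stub this run served from, so substitute a mirror you actually run:

    out/minikube-linux-amd64 start --download-only -p binary-mirror-805915 \
      --alsologtostderr --binary-mirror http://127.0.0.1:45805 \
      --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 delete -p binary-mirror-805915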

TestOffline (64.79s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-979064 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-979064 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m3.94076396s)
helpers_test.go:175: Cleaning up "offline-containerd-979064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-979064
--- PASS: TestOffline (64.79s)
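For reference, the start command exercised above, reflowed; the profile name is the one generated for this run:

    out/minikube-linux-amd64 start -p offline-containerd-979064 --alsologtostderr -v=1 \
      --memory=2048 --wait=true --driver=kvm2 --container-runtime=containerd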

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-647801
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-647801: exit status 85 (68.200703ms)
-- stdout --
	* Profile "addons-647801" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-647801"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-647801
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-647801: exit status 85 (67.495647ms)
-- stdout --
	* Profile "addons-647801" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-647801"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (142.13s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-647801 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-647801 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m22.133324826s)
--- PASS: TestAddons/Setup (142.13s)
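The setup is a single start that enables every addon under test. The invocation from this run, reflowed for readability:

    out/minikube-linux-amd64 start -p addons-647801 --wait=true --memory=4000 --alsologtostderr \
      --driver=kvm2 --container-runtime=containerd \
      --addons=registry --addons=metrics-server --addons=volumesnapshots \
      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
      --addons=inspektor-gadget --addons=storage-provisioner-rancher \
      --addons=nvidia-device-plugin --addons=yakd --addons=ingress \
      --addons=ingress-dns --addons=helm-tiller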

TestAddons/parallel/Registry (13.82s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 28.682044ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-r44nw" [2fa023e4-daf2-4594-bbd7-4251b3206eab] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006892975s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-k57pz" [1f84e77c-c70b-4a43-b5a5-f5e7dce1277a] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004498892s
addons_test.go:340: (dbg) Run:  kubectl --context addons-647801 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-647801 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-647801 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.915235172s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-647801 ip
2024/04/08 18:20:31 [DEBUG] GET http://192.168.39.113:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-647801 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.82s)
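The functional core of the registry check is resolving and probing the in-cluster service from a throwaway pod; the same probe can be run by hand against a cluster with the addon enabled:

    kubectl --context addons-647801 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"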

TestAddons/parallel/Ingress (21.42s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-647801 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-647801 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-647801 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1ee0c033-e232-4690-bfbe-c4c3bc473e3a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1ee0c033-e232-4690-bfbe-c4c3bc473e3a] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004068867s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-647801 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-647801 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-647801 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.113
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-647801 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-647801 addons disable ingress-dns --alsologtostderr -v=1: (2.125512542s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-647801 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-647801 addons disable ingress --alsologtostderr -v=1: (7.936164878s)
--- PASS: TestAddons/parallel/Ingress (21.42s)
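The verification amounts to two lookups against the VM: an HTTP request carrying the test Host header, issued from inside the node, and a DNS query against the ingress-dns resolver. The 192.168.39.113 address is this run's VM IP, as reported by the "ip" command above:

    out/minikube-linux-amd64 -p addons-647801 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test 192.168.39.113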

TestAddons/parallel/InspektorGadget (11.94s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4h4hj" [c42aa74f-d15d-4c03-a9fa-e16ac3a7e361] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005960441s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-647801
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-647801: (5.934842559s)
--- PASS: TestAddons/parallel/InspektorGadget (11.94s)

TestAddons/parallel/MetricsServer (6.23s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.490224ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-r928h" [c4c3cd08-0a1e-4a92-b4a2-29ef016b8992] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01155831s
addons_test.go:415: (dbg) Run:  kubectl --context addons-647801 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-647801 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-647801 addons disable metrics-server --alsologtostderr -v=1: (1.139918646s)
--- PASS: TestAddons/parallel/MetricsServer (6.23s)
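Once the metrics-server pod reports healthy, the check itself is just that resource metrics are queryable:

    kubectl --context addons-647801 top pods -n kube-system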

TestAddons/parallel/HelmTiller (15.24s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 29.044677ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-vmzrg" [a4205dc8-51a8-4c10-89dc-fbd5a9fa2346] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.006364187s
addons_test.go:473: (dbg) Run:  kubectl --context addons-647801 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-647801 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.959663821s)
addons_test.go:478: kubectl --context addons-647801 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-647801 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-647801 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.062176564s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-647801 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.24s)
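The probe runs a one-off helm client pod against the in-cluster tiller; the TTY warning above is harmless for this purpose, since only the streamed version output matters (and the test passes despite it). The same probe by hand:

    kubectl --context addons-647801 run --rm helm-test --restart=Never \
      --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version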

TestAddons/parallel/Headlamp (13.05s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-647801 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-647801 --alsologtostderr -v=1: (1.036054681s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-mpdm9" [c1c44c0d-4aa5-492b-b641-b8a677295492] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-mpdm9" [c1c44c0d-4aa5-492b-b641-b8a677295492] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-mpdm9" [c1c44c0d-4aa5-492b-b641-b8a677295492] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.009686115s
--- PASS: TestAddons/parallel/Headlamp (13.05s)

TestAddons/parallel/CloudSpanner (5.68s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-z54m8" [f522f875-47b8-4ef2-87f6-b9fcc31be79f] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004457988s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-647801
--- PASS: TestAddons/parallel/CloudSpanner (5.68s)

TestAddons/parallel/LocalPath (54.23s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-647801 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-647801 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647801 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [28683697-5ba4-4827-8ba8-ecc026b4600c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [28683697-5ba4-4827-8ba8-ecc026b4600c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [28683697-5ba4-4827-8ba8-ecc026b4600c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005254491s
addons_test.go:891: (dbg) Run:  kubectl --context addons-647801 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-647801 ssh "cat /opt/local-path-provisioner/pvc-98f67a28-2944-4b09-a5b8-08ff2d55447a_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-647801 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-647801 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-647801 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-647801 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.334936585s)
--- PASS: TestAddons/parallel/LocalPath (54.23s)
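The block above exercises the local-path provisioner end to end: the PVC binds, a busybox pod writes to the volume and completes, and the file is read back from the node's hostpath. A minimal manual sketch of the same flow, assuming the addons-647801 profile and the test's testdata manifests (the provisioner directory embeds the bound PV name, so the exact path varies per run; kubectl wait with a jsonpath condition needs a reasonably recent kubectl):

    kubectl --context addons-647801 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-647801 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # Wait for the writer pod to finish.
    kubectl --context addons-647801 wait --for=jsonpath='{.status.phase}'=Succeeded pod/test-local-path --timeout=3m
    # Resolve the bound PV name, then read the file straight off the node.
    PV=$(kubectl --context addons-647801 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
    out/minikube-linux-amd64 -p addons-647801 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"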

TestAddons/parallel/NvidiaDevicePlugin (5.67s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nf6ws" [0870220b-2774-4360-844d-84b35aa706a1] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005681143s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-647801
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.67s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-6bth7" [ba4743dd-e0b6-4d75-9665-de0ac55d2aa1] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004749632s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-647801 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-647801 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (92.77s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-647801
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-647801: (1m32.454675358s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-647801
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-647801
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-647801
--- PASS: TestAddons/StoppedEnableDisable (92.77s)

TestCertOptions (59.22s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-721355 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-721355 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (57.835792938s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-721355 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-721355 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-721355 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-721355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-721355
--- PASS: TestCertOptions (59.22s)
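TestCertOptions confirms that the extra SANs and the custom port requested at start time actually land in the generated apiserver certificate. The same verification by hand (the grep just narrows the openssl dump to the SAN block):

    out/minikube-linux-amd64 start -p cert-options-721355 --memory=2048 \
        --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
        --apiserver-names=localhost --apiserver-names=www.google.com \
        --apiserver-port=8555 --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 -p cert-options-721355 ssh \
        "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
        | grep -A1 "Subject Alternative Name"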

TestCertExpiration (314.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-354606 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
E0408 19:16:43.948409  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-354606 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m50.718556693s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-354606 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-354606 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (22.444862378s)
helpers_test.go:175: Cleaning up "cert-expiration-354606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-354606
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-354606: (1.13781524s)
--- PASS: TestCertExpiration (314.30s)
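TestCertExpiration starts the cluster with certificates valid for only three minutes, lets them lapse, and then restarts with --cert-expiration=8760h; the second start must rotate the expired certs instead of failing, which accounts for the roughly five-minute wall time. A sketch:

    out/minikube-linux-amd64 start -p cert-expiration-354606 --memory=2048 \
        --cert-expiration=3m --driver=kvm2 --container-runtime=containerd
    sleep 180    # let the 3m certificates expire
    # Restarting with a longer validity forces regeneration of the expired certs.
    out/minikube-linux-amd64 start -p cert-expiration-354606 --memory=2048 \
        --cert-expiration=8760h --driver=kvm2 --container-runtime=containerd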

TestForceSystemdFlag (86.06s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-747540 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-747540 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m25.02994165s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-747540 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-747540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-747540
--- PASS: TestForceSystemdFlag (86.06s)
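The follow-up ssh in this test checks that --force-systemd is reflected in containerd's configuration. A sketch of that check (grepping for SystemdCgroup assumes containerd's usual runc runtime key; with systemd forced it should read true):

    out/minikube-linux-amd64 start -p force-systemd-flag-747540 --memory=2048 \
        --force-systemd --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 -p force-systemd-flag-747540 ssh \
        "cat /etc/containerd/config.toml" | grep SystemdCgroup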

TestForceSystemdEnv (73.83s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-038234 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-038234 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m12.607112785s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-038234 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-038234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-038234
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-038234: (1.013204481s)
--- PASS: TestForceSystemdEnv (73.83s)
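TestForceSystemdEnv drives the same assertion through the environment rather than a flag; MINIKUBE_FORCE_SYSTEMD (visible, unset, in the environment dumps later in this report) is the assumed toggle:

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-038234 \
        --memory=2048 --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 -p force-systemd-env-038234 ssh \
        "cat /etc/containerd/config.toml" | grep SystemdCgroup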

TestKVMDriverInstallOrUpdate (3.11s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.11s)

TestErrorSpam/setup (46.72s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-026401 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-026401 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-026401 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-026401 --driver=kvm2  --container-runtime=containerd: (46.717318763s)
--- PASS: TestErrorSpam/setup (46.72s)

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.8s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 status
--- PASS: TestErrorSpam/status (0.80s)

TestErrorSpam/pause (1.68s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 pause
--- PASS: TestErrorSpam/pause (1.68s)

TestErrorSpam/unpause (1.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (4.93s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 stop: (2.315435232s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 stop: (1.628179113s)
--- PASS: TestErrorSpam/stop (4.93s)
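Each TestErrorSpam subtest runs the same subcommand repeatedly against a dedicated --log_dir and asserts the runs stay quiet. A rough manual sketch of that pattern (the final grep is only an approximation of the test's output scan, not the exact check error_spam_test.go performs):

    out/minikube-linux-amd64 start -p nospam-026401 -n=1 --memory=2250 --wait=false \
        --log_dir=/tmp/nospam-026401 --driver=kvm2 --container-runtime=containerd
    # Run a subcommand a few times, then look for unexpected noise in the captured logs.
    out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 pause
    out/minikube-linux-amd64 -p nospam-026401 --log_dir /tmp/nospam-026401 unpause
    grep -riE "error|warning" /tmp/nospam-026401 || echo "no spam found"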

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18585-610499/.minikube/files/etc/test/nested/copy/618237/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (67.22s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-819351 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0408 18:25:18.529754  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:25:18.536023  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:25:18.546392  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:25:18.567019  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:25:18.607649  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:25:18.688042  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:25:18.848618  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:25:19.169260  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:25:19.809860  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:25:21.091063  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-819351 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m7.217552294s)
--- PASS: TestFunctional/serial/StartWithProxy (67.22s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (21.41s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-819351 --alsologtostderr -v=8
E0408 18:25:23.652239  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:25:28.772748  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:25:39.013208  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-819351 --alsologtostderr -v=8: (21.410406613s)
functional_test.go:659: soft start took 21.411042696s for "functional-819351" cluster.
--- PASS: TestFunctional/serial/SoftStart (21.41s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-819351 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-819351 cache add registry.k8s.io/pause:3.1: (1.172951795s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-819351 cache add registry.k8s.io/pause:3.3: (1.268185952s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-819351 cache add registry.k8s.io/pause:latest: (1.217696465s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.66s)
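cache add pulls an image on the host and preloads it into the node's container runtime, so it is usable without registry access; verify_cache_inside_node below then confirms the images with crictl. The core pair of commands:

    # Preload a remote image into the cluster's image store.
    out/minikube-linux-amd64 -p functional-819351 cache add registry.k8s.io/pause:3.1
    # Confirm it landed inside the node.
    out/minikube-linux-amd64 -p functional-819351 ssh sudo crictl images | grep pause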

TestFunctional/serial/CacheCmd/cache/add_local (1.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-819351 /tmp/TestFunctionalserialCacheCmdcacheadd_local2806997227/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 cache add minikube-local-cache-test:functional-819351
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-819351 cache add minikube-local-cache-test:functional-819351: (1.278652796s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 cache delete minikube-local-cache-test:functional-819351
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-819351
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.67s)
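add_local covers the same path for an image that exists only in the host's Docker daemon: build, cache into the node, then clean up both sides. A sketch (the build context path is a placeholder):

    docker build -t minikube-local-cache-test:functional-819351 ./local-cache-context
    out/minikube-linux-amd64 -p functional-819351 cache add minikube-local-cache-test:functional-819351
    # Tear down both the cached entry and the host-side image.
    out/minikube-linux-amd64 -p functional-819351 cache delete minikube-local-cache-test:functional-819351
    docker rmi minikube-local-cache-test:functional-819351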

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819351 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (230.699251ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-819351 cache reload: (1.196695077s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.90s)
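cache_reload is the most instructive of the cache subtests: it removes a cached image from inside the node behind minikube's back, confirms crictl no longer finds it (the non-zero exit above), then uses cache reload to push every cached image back in. By hand:

    out/minikube-linux-amd64 -p functional-819351 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # Expected to fail now with 'no such image'.
    out/minikube-linux-amd64 -p functional-819351 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # Re-sync the cache into the node, then re-check.
    out/minikube-linux-amd64 -p functional-819351 cache reload
    out/minikube-linux-amd64 -p functional-819351 ssh sudo crictl inspecti registry.k8s.io/pause:latest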

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 kubectl -- --context functional-819351 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-819351 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (43.72s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-819351 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0408 18:25:59.493559  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-819351 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.718158882s)
functional_test.go:757: restart took 43.718314248s for "functional-819351" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.72s)
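--extra-config forwards component flags through to kubeadm: the component.key=value triple here turns on an extra apiserver admission plugin, and --wait=all makes the restart block until every component reports healthy. A sketch, with the grep assuming the standard kubeadm static-pod manifest path:

    out/minikube-linux-amd64 start -p functional-819351 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # The flag should now be present on the running apiserver's static-pod manifest.
    out/minikube-linux-amd64 -p functional-819351 ssh \
        "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"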

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-819351 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
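ComponentHealth pulls the control-plane pods (label tier=control-plane in kube-system) and requires each to be Running and Ready, as echoed above. An equivalent spot check (the jsonpath shape is just one way to surface the phase per pod):

    kubectl --context functional-819351 get po -n kube-system -l tier=control-plane \
        -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.phase}{"\n"}{end}'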

TestFunctional/serial/LogsCmd (1.55s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-819351 logs: (1.548164928s)
--- PASS: TestFunctional/serial/LogsCmd (1.55s)

TestFunctional/serial/LogsFileCmd (1.57s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 logs --file /tmp/TestFunctionalserialLogsFileCmd1706971103/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-819351 logs --file /tmp/TestFunctionalserialLogsFileCmd1706971103/001/logs.txt: (1.568117607s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.57s)

TestFunctional/serial/InvalidService (5.22s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-819351 apply -f testdata/invalidsvc.yaml
E0408 18:26:40.454553  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-819351
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-819351: exit status 115 (306.180157ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.116:30283 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-819351 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-819351 delete -f testdata/invalidsvc.yaml: (1.700955301s)
--- PASS: TestFunctional/serial/InvalidService (5.22s)
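InvalidService covers the failure path: a Service whose selector matches no running pod should make 'minikube service' exit non-zero with SVC_UNREACHABLE instead of printing a dead URL. A reproduction sketch using the test's manifest (any selector-without-pods Service behaves the same):

    kubectl --context functional-819351 apply -f testdata/invalidsvc.yaml
    # Expected: exit status 115 plus the SVC_UNREACHABLE advice shown above.
    out/minikube-linux-amd64 service invalid-svc -p functional-819351
    echo "exit=$?"
    kubectl --context functional-819351 delete -f testdata/invalidsvc.yaml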

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819351 config get cpus: exit status 14 (64.13928ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819351 config get cpus: exit status 14 (65.045846ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
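The config subcommands round-trip per-profile settings, and config get on an unset key exits 14, which is exactly what the two Non-zero exits above assert:

    out/minikube-linux-amd64 -p functional-819351 config set cpus 2
    out/minikube-linux-amd64 -p functional-819351 config get cpus      # prints 2
    out/minikube-linux-amd64 -p functional-819351 config unset cpus
    out/minikube-linux-amd64 -p functional-819351 config get cpus      # exit 14: key not found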

TestFunctional/parallel/DashboardCmd (13.81s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-819351 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-819351 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 626515: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.81s)

TestFunctional/parallel/DryRun (0.32s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-819351 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-819351 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (162.860144ms)

-- stdout --
	* [functional-819351] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18585-610499/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-610499/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0408 18:27:07.630618  626138 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:27:07.630755  626138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:27:07.630765  626138 out.go:304] Setting ErrFile to fd 2...
	I0408 18:27:07.630769  626138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:27:07.630991  626138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
	I0408 18:27:07.631575  626138 out.go:298] Setting JSON to false
	I0408 18:27:07.632632  626138 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7779,"bootTime":1712593049,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 18:27:07.632711  626138 start.go:139] virtualization: kvm guest
	I0408 18:27:07.636159  626138 out.go:177] * [functional-819351] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 18:27:07.637545  626138 notify.go:220] Checking for updates...
	I0408 18:27:07.637556  626138 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 18:27:07.638931  626138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:27:07.640474  626138 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18585-610499/kubeconfig
	I0408 18:27:07.641902  626138 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-610499/.minikube
	I0408 18:27:07.643396  626138 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 18:27:07.644845  626138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 18:27:07.647255  626138 config.go:182] Loaded profile config "functional-819351": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:27:07.647907  626138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:27:07.647996  626138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:27:07.664096  626138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I0408 18:27:07.664555  626138 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:27:07.665113  626138 main.go:141] libmachine: Using API Version  1
	I0408 18:27:07.665140  626138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:27:07.665571  626138 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:27:07.665760  626138 main.go:141] libmachine: (functional-819351) Calling .DriverName
	I0408 18:27:07.666026  626138 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 18:27:07.666310  626138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:27:07.666349  626138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:27:07.682684  626138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40705
	I0408 18:27:07.683153  626138 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:27:07.683707  626138 main.go:141] libmachine: Using API Version  1
	I0408 18:27:07.683728  626138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:27:07.684066  626138 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:27:07.684282  626138 main.go:141] libmachine: (functional-819351) Calling .DriverName
	I0408 18:27:07.718080  626138 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 18:27:07.719455  626138 start.go:297] selected driver: kvm2
	I0408 18:27:07.719475  626138 start.go:901] validating driver "kvm2" against &{Name:functional-819351 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-819351 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:27:07.719630  626138 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 18:27:07.722106  626138 out.go:177] 
	W0408 18:27:07.723423  626138 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0408 18:27:07.724937  626138 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-819351 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.32s)
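--dry-run validates the requested configuration without creating or touching a machine, which is why the 250MB request above fails fast with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23. A sketch:

    # Validation only; no VM is created or modified.
    out/minikube-linux-amd64 start -p functional-819351 --dry-run --memory 250MB \
        --alsologtostderr --driver=kvm2 --container-runtime=containerd
    echo "exit=$?"   # expected: 23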

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-819351 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-819351 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (178.874865ms)

-- stdout --
	* [functional-819351] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18585-610499/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-610499/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0408 18:27:06.569197  625844 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:27:06.569327  625844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:27:06.569337  625844 out.go:304] Setting ErrFile to fd 2...
	I0408 18:27:06.569342  625844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:27:06.569630  625844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
	I0408 18:27:06.570365  625844 out.go:298] Setting JSON to false
	I0408 18:27:06.571455  625844 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7778,"bootTime":1712593049,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 18:27:06.571520  625844 start.go:139] virtualization: kvm guest
	I0408 18:27:06.573933  625844 out.go:177] * [functional-819351] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0408 18:27:06.575346  625844 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 18:27:06.575396  625844 notify.go:220] Checking for updates...
	I0408 18:27:06.576769  625844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:27:06.577976  625844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18585-610499/kubeconfig
	I0408 18:27:06.579749  625844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-610499/.minikube
	I0408 18:27:06.583683  625844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 18:27:06.585041  625844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 18:27:06.586749  625844 config.go:182] Loaded profile config "functional-819351": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:27:06.587462  625844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:27:06.587517  625844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:27:06.607445  625844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0408 18:27:06.608144  625844 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:27:06.608804  625844 main.go:141] libmachine: Using API Version  1
	I0408 18:27:06.608835  625844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:27:06.609193  625844 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:27:06.609417  625844 main.go:141] libmachine: (functional-819351) Calling .DriverName
	I0408 18:27:06.609797  625844 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 18:27:06.610201  625844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:27:06.610239  625844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:27:06.627444  625844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35343
	I0408 18:27:06.627962  625844 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:27:06.628550  625844 main.go:141] libmachine: Using API Version  1
	I0408 18:27:06.628586  625844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:27:06.628923  625844 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:27:06.629106  625844 main.go:141] libmachine: (functional-819351) Calling .DriverName
	I0408 18:27:06.669776  625844 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0408 18:27:06.670978  625844 start.go:297] selected driver: kvm2
	I0408 18:27:06.670995  625844 start.go:901] validating driver "kvm2" against &{Name:functional-819351 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18585/minikube-v1.33.0-1712570768-18585-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712593525-18585@sha256:82295aae32f93620eb23c604c6fbfbc087f5827d39119a722f4d08f3622b1dfd Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-819351 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:27:06.671114  625844 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 18:27:06.673273  625844 out.go:177] 
	W0408 18:27:06.674584  625844 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0408 18:27:06.676010  625844 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (0.89s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)

TestFunctional/parallel/ServiceCmdConnect (10.51s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-819351 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-819351 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-7mz9j" [7364de83-b1a4-4bb9-993f-6e2ceaac0fe6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-7mz9j" [7364de83-b1a4-4bb9-993f-6e2ceaac0fe6] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004444914s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.116:32214
functional_test.go:1671: http://192.168.39.116:32214: success! body:

Hostname: hello-node-connect-55497b8b78-7mz9j

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.116:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.116:32214
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.51s)
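
The block above is the standard NodePort round trip: create a deployment, expose port 8080 as a NodePort service, ask minikube for the node URL, then GET it until the echoserver answers. A sketch of the final polling step, reusing the endpoint printed in the log:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.39.116:32214" // endpoint reported by `service --url` above
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("success! body:\n%s\n", body)
				return
			}
		}
		time.Sleep(2 * time.Second) // the pod may still be starting
	}
	fmt.Println("endpoint never became ready")
}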

TestFunctional/parallel/AddonsCmd (0.17s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (42.56s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [48e1eb35-7560-4d95-908e-e20a8508fb7b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004109298s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-819351 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-819351 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-819351 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-819351 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f253c5d2-9005-4a87-a35d-3e4e0bd54650] Pending
helpers_test.go:344: "sp-pod" [f253c5d2-9005-4a87-a35d-3e4e0bd54650] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f253c5d2-9005-4a87-a35d-3e4e0bd54650] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.005050492s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-819351 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-819351 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-819351 delete -f testdata/storage-provisioner/pod.yaml: (1.676693555s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-819351 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3ef02282-4728-4a22-bd1b-bd3c486747eb] Pending
helpers_test.go:344: "sp-pod" [3ef02282-4728-4a22-bd1b-bd3c486747eb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3ef02282-4728-4a22-bd1b-bd3c486747eb] Running
2024/04/08 18:27:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.005132923s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-819351 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.56s)
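
The delete-and-reapply cycle above is what makes this a persistence test: /tmp/mount/foo is written through the claim, the pod is destroyed, and a freshly scheduled pod must still see the file. A compressed sketch of that assertion, using the same context and manifests as the log (the run helper is illustrative, and a real test waits for the new pod to be Running before the final exec):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run wraps kubectl for the functional-819351 context.
func run(args ...string) (string, error) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-819351"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")       // write through the PVC
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml") // destroy the pod, keep the claim
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")  // schedule a fresh pod
	out, err := run("exec", "sp-pod", "--", "ls", "/tmp/mount")
	if err == nil && strings.Contains(out, "foo") {
		fmt.Println("data survived pod recreation")
	} else {
		fmt.Println("persistence check failed:", out)
	}
}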

TestFunctional/parallel/SSHCmd (0.47s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

TestFunctional/parallel/CpCmd (1.38s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh -n functional-819351 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 cp functional-819351:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd794730907/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh -n functional-819351 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh -n functional-819351 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.38s)

TestFunctional/parallel/MySQL (29.85s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-819351 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-q9srh" [515e29b2-0561-42bb-b1e5-c0d27152a34f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-q9srh" [515e29b2-0561-42bb-b1e5-c0d27152a34f] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.00545624s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-819351 exec mysql-859648c796-q9srh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-819351 exec mysql-859648c796-q9srh -- mysql -ppassword -e "show databases;": exit status 1 (198.945659ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-819351 exec mysql-859648c796-q9srh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-819351 exec mysql-859648c796-q9srh -- mysql -ppassword -e "show databases;": exit status 1 (222.304785ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-819351 exec mysql-859648c796-q9srh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-819351 exec mysql-859648c796-q9srh -- mysql -ppassword -e "show databases;": exit status 1 (223.760536ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-819351 exec mysql-859648c796-q9srh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-819351 exec mysql-859648c796-q9srh -- mysql -ppassword -e "show databases;": exit status 1 (192.674033ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-819351 exec mysql-859648c796-q9srh -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.85s)
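
The ERROR 1045 and ERROR 2002 responses above are normal while the mysql container initializes: its entrypoint brings the server up, applies credentials, and restarts it, so early queries are refused. The test simply reruns the query until one succeeds. A sketch of that retry pattern, using the pod name from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-859648c796-q9srh" // pod name from the log above
	for attempt := 1; attempt <= 10; attempt++ {
		cmd := exec.Command("kubectl", "--context", "functional-819351",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("query succeeded on attempt %d:\n%s", attempt, out)
			return
		}
		time.Sleep(3 * time.Second) // server may still be initializing or restarting
	}
	fmt.Println("mysql never became queryable")
}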

TestFunctional/parallel/FileSync (0.23s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/618237/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "sudo cat /etc/test/nested/copy/618237/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.37s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/618237.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "sudo cat /etc/ssl/certs/618237.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/618237.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "sudo cat /usr/share/ca-certificates/618237.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/6182372.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "sudo cat /etc/ssl/certs/6182372.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/6182372.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "sudo cat /usr/share/ca-certificates/6182372.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.37s)

TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-819351 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819351 ssh "sudo systemctl is-active docker": exit status 1 (218.620337ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819351 ssh "sudo systemctl is-active crio": exit status 1 (318.270689ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
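
The non-zero exits above are the passing condition: systemctl is-active prints "inactive" and exits with status 3 for a unit that is not running, and minikube ssh propagates that exit status. On a containerd profile, docker and crio must both report inactive. A sketch of interpreting that convention, with the binary path and profile used throughout this report:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-819351",
			"ssh", "sudo systemctl is-active "+unit)
		out, err := cmd.Output() // is-active exits 3 (non-zero) for inactive units
		if err != nil && strings.TrimSpace(string(out)) == "inactive" {
			fmt.Printf("%s is disabled, as expected on a containerd profile\n", unit)
		} else {
			fmt.Printf("unexpected state for %s: %q\n", unit, out)
		}
	}
}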

TestFunctional/parallel/License (0.16s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.76s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.76s)

TestFunctional/parallel/MountCmd/any-port (19.88s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-819351 /tmp/TestFunctionalparallelMountCmdany-port1805492896/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1712600805557244864" to /tmp/TestFunctionalparallelMountCmdany-port1805492896/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1712600805557244864" to /tmp/TestFunctionalparallelMountCmdany-port1805492896/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1712600805557244864" to /tmp/TestFunctionalparallelMountCmdany-port1805492896/001/test-1712600805557244864
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819351 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (230.057578ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  8 18:26 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  8 18:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  8 18:26 test-1712600805557244864
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh cat /mount-9p/test-1712600805557244864
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-819351 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [736fc61f-dd66-4b47-a3cb-892ec49e6238] Pending
helpers_test.go:344: "busybox-mount" [736fc61f-dd66-4b47-a3cb-892ec49e6238] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [736fc61f-dd66-4b47-a3cb-892ec49e6238] Running
helpers_test.go:344: "busybox-mount" [736fc61f-dd66-4b47-a3cb-892ec49e6238] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [736fc61f-dd66-4b47-a3cb-892ec49e6238] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 17.006932622s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-819351 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-819351 /tmp/TestFunctionalparallelMountCmdany-port1805492896/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.88s)
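
The first findmnt failure above is expected: the mount helper is still attaching when the check first runs, so the test retries before inspecting /mount-9p. A sketch of that retry loop, with the same profile and mount point as the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-819351",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("9p mount is up:\n%s", out)
			return
		}
		time.Sleep(time.Second) // the mount daemon may not have attached yet
	}
	fmt.Println("/mount-9p never appeared")
}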

TestFunctional/parallel/ServiceCmd/DeployApp (7.19s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-819351 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-819351 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-2gtnm" [30fa4204-1003-4d70-b2fc-d0928108e0de] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-2gtnm" [30fa4204-1003-4d70-b2fc-d0928108e0de] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.007859s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.19s)

TestFunctional/parallel/ServiceCmd/List (0.47s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 service list -o json
functional_test.go:1490: Took "494.816182ms" to run "out/minikube-linux-amd64 -p functional-819351 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.116:32340
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.116:32340
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

TestFunctional/parallel/MountCmd/specific-port (1.98s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-819351 /tmp/TestFunctionalparallelMountCmdspecific-port324065165/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819351 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (297.233531ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-819351 /tmp/TestFunctionalparallelMountCmdspecific-port324065165/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819351 ssh "sudo umount -f /mount-9p": exit status 1 (216.766413ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-819351 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-819351 /tmp/TestFunctionalparallelMountCmdspecific-port324065165/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "252.299351ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "60.030314ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "302.633ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "60.70731ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-819351 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1899235226/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-819351 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1899235226/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-819351 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1899235226/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819351 ssh "findmnt -T" /mount1: exit status 1 (321.426227ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-819351 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-819351 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1899235226/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-819351 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1899235226/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-819351 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1899235226/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-819351 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-819351
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-819351
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-819351 image ls --format short --alsologtostderr:
I0408 18:27:28.922746  627095 out.go:291] Setting OutFile to fd 1 ...
I0408 18:27:28.923090  627095 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:27:28.923107  627095 out.go:304] Setting ErrFile to fd 2...
I0408 18:27:28.923114  627095 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:27:28.923297  627095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
I0408 18:27:28.923884  627095 config.go:182] Loaded profile config "functional-819351": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:27:28.923998  627095 config.go:182] Loaded profile config "functional-819351": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:27:28.924337  627095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 18:27:28.924396  627095 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:27:28.939309  627095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46665
I0408 18:27:28.939824  627095 main.go:141] libmachine: () Calling .GetVersion
I0408 18:27:28.940427  627095 main.go:141] libmachine: Using API Version  1
I0408 18:27:28.940451  627095 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:27:28.940897  627095 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:27:28.941155  627095 main.go:141] libmachine: (functional-819351) Calling .GetState
I0408 18:27:28.943251  627095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 18:27:28.943295  627095 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:27:28.959978  627095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
I0408 18:27:28.960468  627095 main.go:141] libmachine: () Calling .GetVersion
I0408 18:27:28.960935  627095 main.go:141] libmachine: Using API Version  1
I0408 18:27:28.960951  627095 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:27:28.961342  627095 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:27:28.961607  627095 main.go:141] libmachine: (functional-819351) Calling .DriverName
I0408 18:27:28.961837  627095 ssh_runner.go:195] Run: systemctl --version
I0408 18:27:28.961862  627095 main.go:141] libmachine: (functional-819351) Calling .GetSSHHostname
I0408 18:27:28.965168  627095 main.go:141] libmachine: (functional-819351) DBG | domain functional-819351 has defined MAC address 52:54:00:9e:57:25 in network mk-functional-819351
I0408 18:27:28.965873  627095 main.go:141] libmachine: (functional-819351) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:57:25", ip: ""} in network mk-functional-819351: {Iface:virbr1 ExpiryTime:2024-04-08 19:24:31 +0000 UTC Type:0 Mac:52:54:00:9e:57:25 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:functional-819351 Clientid:01:52:54:00:9e:57:25}
I0408 18:27:28.965960  627095 main.go:141] libmachine: (functional-819351) DBG | domain functional-819351 has defined IP address 192.168.39.116 and MAC address 52:54:00:9e:57:25 in network mk-functional-819351
I0408 18:27:28.966320  627095 main.go:141] libmachine: (functional-819351) Calling .GetSSHPort
I0408 18:27:28.966493  627095 main.go:141] libmachine: (functional-819351) Calling .GetSSHKeyPath
I0408 18:27:28.966704  627095 main.go:141] libmachine: (functional-819351) Calling .GetSSHUsername
I0408 18:27:28.966846  627095 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/functional-819351/id_rsa Username:docker}
I0408 18:27:29.055172  627095 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 18:27:29.123783  627095 main.go:141] libmachine: Making call to close driver server
I0408 18:27:29.123800  627095 main.go:141] libmachine: (functional-819351) Calling .Close
I0408 18:27:29.124092  627095 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:27:29.124111  627095 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:27:29.124114  627095 main.go:141] libmachine: (functional-819351) DBG | Closing plugin on server side
I0408 18:27:29.124125  627095 main.go:141] libmachine: Making call to close driver server
I0408 18:27:29.124135  627095 main.go:141] libmachine: (functional-819351) Calling .Close
I0408 18:27:29.124409  627095 main.go:141] libmachine: (functional-819351) DBG | Closing plugin on server side
I0408 18:27:29.124505  627095 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:27:29.124525  627095 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
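
As the stderr above shows, image ls is an SSH round trip: minikube runs `sudo crictl images --output json` inside the VM and reformats the result. A minimal sketch of decoding that JSON shape; the sample literal is truncated from the real output, and a caller would read it over SSH instead:

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal slice of the `crictl images --output json` response shape.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
	} `json:"images"`
}

func main() {
	// Truncated sample in the shape crictl emits.
	raw := `{"images":[{"id":"sha256:e6f181","repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"}]}`
	var list imageList
	if err := json.Unmarshal([]byte(raw), &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}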

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-819351 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.29.3            | sha256:39f995 | 35.1MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4950bb | 27.8MB |
| docker.io/library/nginx                     | latest             | sha256:92b11f | 70.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-proxy                  | v1.29.3            | sha256:a1d263 | 28.4MB |
| docker.io/library/minikube-local-cache-test | functional-819351  | sha256:e9f7b4 | 990B   |
| gcr.io/google-containers/addon-resizer      | functional-819351  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-controller-manager     | v1.29.3            | sha256:6052a2 | 33.5MB |
| registry.k8s.io/kube-scheduler              | v1.29.3            | sha256:8c390d | 18.6MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-819351 image ls --format table --alsologtostderr:
I0408 18:27:29.203866  627201 out.go:291] Setting OutFile to fd 1 ...
I0408 18:27:29.204146  627201 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:27:29.204157  627201 out.go:304] Setting ErrFile to fd 2...
I0408 18:27:29.204161  627201 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:27:29.204366  627201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
I0408 18:27:29.204969  627201 config.go:182] Loaded profile config "functional-819351": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:27:29.205069  627201 config.go:182] Loaded profile config "functional-819351": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:27:29.205453  627201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 18:27:29.205514  627201 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:27:29.221884  627201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39969
I0408 18:27:29.222383  627201 main.go:141] libmachine: () Calling .GetVersion
I0408 18:27:29.223094  627201 main.go:141] libmachine: Using API Version  1
I0408 18:27:29.223124  627201 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:27:29.223463  627201 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:27:29.223692  627201 main.go:141] libmachine: (functional-819351) Calling .GetState
I0408 18:27:29.225608  627201 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 18:27:29.225648  627201 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:27:29.241198  627201 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37567
I0408 18:27:29.241694  627201 main.go:141] libmachine: () Calling .GetVersion
I0408 18:27:29.242295  627201 main.go:141] libmachine: Using API Version  1
I0408 18:27:29.242347  627201 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:27:29.242694  627201 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:27:29.242921  627201 main.go:141] libmachine: (functional-819351) Calling .DriverName
I0408 18:27:29.243121  627201 ssh_runner.go:195] Run: systemctl --version
I0408 18:27:29.243144  627201 main.go:141] libmachine: (functional-819351) Calling .GetSSHHostname
I0408 18:27:29.246021  627201 main.go:141] libmachine: (functional-819351) DBG | domain functional-819351 has defined MAC address 52:54:00:9e:57:25 in network mk-functional-819351
I0408 18:27:29.246412  627201 main.go:141] libmachine: (functional-819351) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:57:25", ip: ""} in network mk-functional-819351: {Iface:virbr1 ExpiryTime:2024-04-08 19:24:31 +0000 UTC Type:0 Mac:52:54:00:9e:57:25 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:functional-819351 Clientid:01:52:54:00:9e:57:25}
I0408 18:27:29.246436  627201 main.go:141] libmachine: (functional-819351) DBG | domain functional-819351 has defined IP address 192.168.39.116 and MAC address 52:54:00:9e:57:25 in network mk-functional-819351
I0408 18:27:29.246587  627201 main.go:141] libmachine: (functional-819351) Calling .GetSSHPort
I0408 18:27:29.246788  627201 main.go:141] libmachine: (functional-819351) Calling .GetSSHKeyPath
I0408 18:27:29.246983  627201 main.go:141] libmachine: (functional-819351) Calling .GetSSHUsername
I0408 18:27:29.247149  627201 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/functional-819351/id_rsa Username:docker}
I0408 18:27:29.332620  627201 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 18:27:29.383487  627201 main.go:141] libmachine: Making call to close driver server
I0408 18:27:29.383535  627201 main.go:141] libmachine: (functional-819351) Calling .Close
I0408 18:27:29.383869  627201 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:27:29.383903  627201 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:27:29.383912  627201 main.go:141] libmachine: Making call to close driver server
I0408 18:27:29.383920  627201 main.go:141] libmachine: (functional-819351) Calling .Close
I0408 18:27:29.383869  627201 main.go:141] libmachine: (functional-819351) DBG | Closing plugin on server side
I0408 18:27:29.384170  627201 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:27:29.384186  627201 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:27:29.384222  627201 main.go:141] libmachine: (functional-819351) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-819351 image ls --format json --alsologtostderr:
[{"id":"sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"27755257"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"33466661"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"70534964"},{"id":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"35100536"},{"id":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"28398741"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:e9f7b4905cdc75ff116b1a129d9ffacdc01fa1faa677cbe2d9c586f6c5f8eacc","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-819351"],"size":"990"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"18553260"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-819351"],"size":"10823156"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-819351 image ls --format json --alsologtostderr:
I0408 18:27:28.922028  627098 out.go:291] Setting OutFile to fd 1 ...
I0408 18:27:28.922338  627098 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:27:28.922359  627098 out.go:304] Setting ErrFile to fd 2...
I0408 18:27:28.922370  627098 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:27:28.922611  627098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
I0408 18:27:28.923325  627098 config.go:182] Loaded profile config "functional-819351": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:27:28.923448  627098 config.go:182] Loaded profile config "functional-819351": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:27:28.923896  627098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 18:27:28.923939  627098 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:27:28.939218  627098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42605
I0408 18:27:28.939762  627098 main.go:141] libmachine: () Calling .GetVersion
I0408 18:27:28.940409  627098 main.go:141] libmachine: Using API Version  1
I0408 18:27:28.940435  627098 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:27:28.940853  627098 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:27:28.941100  627098 main.go:141] libmachine: (functional-819351) Calling .GetState
I0408 18:27:28.943198  627098 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 18:27:28.943246  627098 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:27:28.959129  627098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35189
I0408 18:27:28.959577  627098 main.go:141] libmachine: () Calling .GetVersion
I0408 18:27:28.960182  627098 main.go:141] libmachine: Using API Version  1
I0408 18:27:28.960209  627098 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:27:28.960543  627098 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:27:28.960704  627098 main.go:141] libmachine: (functional-819351) Calling .DriverName
I0408 18:27:28.960929  627098 ssh_runner.go:195] Run: systemctl --version
I0408 18:27:28.960960  627098 main.go:141] libmachine: (functional-819351) Calling .GetSSHHostname
I0408 18:27:28.964577  627098 main.go:141] libmachine: (functional-819351) DBG | domain functional-819351 has defined MAC address 52:54:00:9e:57:25 in network mk-functional-819351
I0408 18:27:28.964977  627098 main.go:141] libmachine: (functional-819351) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:57:25", ip: ""} in network mk-functional-819351: {Iface:virbr1 ExpiryTime:2024-04-08 19:24:31 +0000 UTC Type:0 Mac:52:54:00:9e:57:25 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:functional-819351 Clientid:01:52:54:00:9e:57:25}
I0408 18:27:28.965008  627098 main.go:141] libmachine: (functional-819351) DBG | domain functional-819351 has defined IP address 192.168.39.116 and MAC address 52:54:00:9e:57:25 in network mk-functional-819351
I0408 18:27:28.965274  627098 main.go:141] libmachine: (functional-819351) Calling .GetSSHPort
I0408 18:27:28.965449  627098 main.go:141] libmachine: (functional-819351) Calling .GetSSHKeyPath
I0408 18:27:28.965558  627098 main.go:141] libmachine: (functional-819351) Calling .GetSSHUsername
I0408 18:27:28.965717  627098 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/functional-819351/id_rsa Username:docker}
I0408 18:27:29.059416  627098 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 18:27:29.127106  627098 main.go:141] libmachine: Making call to close driver server
I0408 18:27:29.127122  627098 main.go:141] libmachine: (functional-819351) Calling .Close
I0408 18:27:29.127434  627098 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:27:29.127450  627098 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:27:29.127459  627098 main.go:141] libmachine: Making call to close driver server
I0408 18:27:29.127469  627098 main.go:141] libmachine: (functional-819351) Calling .Close
I0408 18:27:29.127714  627098 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:27:29.127735  627098 main.go:141] libmachine: Making call to close connection to plugin binary
W0408 18:27:29.130472  627098 root.go:91] failed to log command end to audit: failed to find a log row with id equals to cfa5aa51-d932-408d-bd24-6d8b7db01689
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
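The stdout above is a single JSON array whose elements carry the keys id, repoDigests, repoTags and size (size is a byte count serialized as a string). A minimal decoding sketch in Go, assuming only that shape; the listImage type below is illustrative and is not minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// listImage mirrors the keys visible in the stdout dump above.
type listImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	// Pipe the command under test into this program, e.g.:
	//   out/minikube-linux-amd64 -p functional-819351 image ls --format json | ./decode
	var images []listImage
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%s\t%s bytes\n", tag, img.Size)
	}
}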
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-819351 image ls --format yaml --alsologtostderr:
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:e9f7b4905cdc75ff116b1a129d9ffacdc01fa1faa677cbe2d9c586f6c5f8eacc
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-819351
size: "990"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-819351
size: "10823156"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "18553260"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "33466661"
- id: sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "28398741"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "27755257"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "70534964"
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "35100536"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-819351 image ls --format yaml --alsologtostderr:
I0408 18:27:28.930460  627096 out.go:291] Setting OutFile to fd 1 ...
I0408 18:27:28.930605  627096 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:27:28.930615  627096 out.go:304] Setting ErrFile to fd 2...
I0408 18:27:28.930620  627096 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:27:28.930810  627096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
I0408 18:27:28.931328  627096 config.go:182] Loaded profile config "functional-819351": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:27:28.931429  627096 config.go:182] Loaded profile config "functional-819351": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:27:28.931824  627096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 18:27:28.931869  627096 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:27:28.948197  627096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34737
I0408 18:27:28.948985  627096 main.go:141] libmachine: () Calling .GetVersion
I0408 18:27:28.952328  627096 main.go:141] libmachine: Using API Version  1
I0408 18:27:28.952375  627096 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:27:28.952847  627096 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:27:28.953047  627096 main.go:141] libmachine: (functional-819351) Calling .GetState
I0408 18:27:28.955591  627096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 18:27:28.955643  627096 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:27:28.972360  627096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
I0408 18:27:28.972843  627096 main.go:141] libmachine: () Calling .GetVersion
I0408 18:27:28.973559  627096 main.go:141] libmachine: Using API Version  1
I0408 18:27:28.973609  627096 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:27:28.974027  627096 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:27:28.974231  627096 main.go:141] libmachine: (functional-819351) Calling .DriverName
I0408 18:27:28.974504  627096 ssh_runner.go:195] Run: systemctl --version
I0408 18:27:28.974535  627096 main.go:141] libmachine: (functional-819351) Calling .GetSSHHostname
I0408 18:27:28.977857  627096 main.go:141] libmachine: (functional-819351) DBG | domain functional-819351 has defined MAC address 52:54:00:9e:57:25 in network mk-functional-819351
I0408 18:27:28.978237  627096 main.go:141] libmachine: (functional-819351) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:57:25", ip: ""} in network mk-functional-819351: {Iface:virbr1 ExpiryTime:2024-04-08 19:24:31 +0000 UTC Type:0 Mac:52:54:00:9e:57:25 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:functional-819351 Clientid:01:52:54:00:9e:57:25}
I0408 18:27:28.978262  627096 main.go:141] libmachine: (functional-819351) DBG | domain functional-819351 has defined IP address 192.168.39.116 and MAC address 52:54:00:9e:57:25 in network mk-functional-819351
I0408 18:27:28.978496  627096 main.go:141] libmachine: (functional-819351) Calling .GetSSHPort
I0408 18:27:28.978713  627096 main.go:141] libmachine: (functional-819351) Calling .GetSSHKeyPath
I0408 18:27:28.978871  627096 main.go:141] libmachine: (functional-819351) Calling .GetSSHUsername
I0408 18:27:28.979072  627096 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/functional-819351/id_rsa Username:docker}
I0408 18:27:29.084982  627096 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 18:27:29.168453  627096 main.go:141] libmachine: Making call to close driver server
I0408 18:27:29.168472  627096 main.go:141] libmachine: (functional-819351) Calling .Close
I0408 18:27:29.168785  627096 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:27:29.168808  627096 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:27:29.168816  627096 main.go:141] libmachine: Making call to close driver server
I0408 18:27:29.168820  627096 main.go:141] libmachine: (functional-819351) DBG | Closing plugin on server side
I0408 18:27:29.168824  627096 main.go:141] libmachine: (functional-819351) Calling .Close
I0408 18:27:29.169104  627096 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:27:29.169126  627096 main.go:141] libmachine: Making call to close connection to plugin binary
W0408 18:27:29.171704  627096 root.go:91] failed to log command end to audit: failed to find a log row with id equals to fbb69dc6-2bf2-4d76-ad43-df1366ce9655
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.86s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-819351 ssh pgrep buildkitd: exit status 1 (254.69309ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image build -t localhost/my-image:functional-819351 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-819351 image build -t localhost/my-image:functional-819351 testdata/build --alsologtostderr: (2.370319623s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-819351 image build -t localhost/my-image:functional-819351 testdata/build --alsologtostderr:
I0408 18:27:29.169142  627191 out.go:291] Setting OutFile to fd 1 ...
I0408 18:27:29.169305  627191 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:27:29.169314  627191 out.go:304] Setting ErrFile to fd 2...
I0408 18:27:29.169319  627191 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 18:27:29.169566  627191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
I0408 18:27:29.170327  627191 config.go:182] Loaded profile config "functional-819351": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:27:29.171003  627191 config.go:182] Loaded profile config "functional-819351": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0408 18:27:29.171343  627191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 18:27:29.171401  627191 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:27:29.188476  627191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
I0408 18:27:29.189062  627191 main.go:141] libmachine: () Calling .GetVersion
I0408 18:27:29.189666  627191 main.go:141] libmachine: Using API Version  1
I0408 18:27:29.189690  627191 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:27:29.190140  627191 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:27:29.190347  627191 main.go:141] libmachine: (functional-819351) Calling .GetState
I0408 18:27:29.192359  627191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0408 18:27:29.192396  627191 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:27:29.208866  627191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
I0408 18:27:29.209532  627191 main.go:141] libmachine: () Calling .GetVersion
I0408 18:27:29.209992  627191 main.go:141] libmachine: Using API Version  1
I0408 18:27:29.210007  627191 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:27:29.210494  627191 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:27:29.210728  627191 main.go:141] libmachine: (functional-819351) Calling .DriverName
I0408 18:27:29.210938  627191 ssh_runner.go:195] Run: systemctl --version
I0408 18:27:29.210968  627191 main.go:141] libmachine: (functional-819351) Calling .GetSSHHostname
I0408 18:27:29.213982  627191 main.go:141] libmachine: (functional-819351) DBG | domain functional-819351 has defined MAC address 52:54:00:9e:57:25 in network mk-functional-819351
I0408 18:27:29.214399  627191 main.go:141] libmachine: (functional-819351) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:57:25", ip: ""} in network mk-functional-819351: {Iface:virbr1 ExpiryTime:2024-04-08 19:24:31 +0000 UTC Type:0 Mac:52:54:00:9e:57:25 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:functional-819351 Clientid:01:52:54:00:9e:57:25}
I0408 18:27:29.214441  627191 main.go:141] libmachine: (functional-819351) DBG | domain functional-819351 has defined IP address 192.168.39.116 and MAC address 52:54:00:9e:57:25 in network mk-functional-819351
I0408 18:27:29.214620  627191 main.go:141] libmachine: (functional-819351) Calling .GetSSHPort
I0408 18:27:29.214874  627191 main.go:141] libmachine: (functional-819351) Calling .GetSSHKeyPath
I0408 18:27:29.215049  627191 main.go:141] libmachine: (functional-819351) Calling .GetSSHUsername
I0408 18:27:29.215205  627191 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/functional-819351/id_rsa Username:docker}
I0408 18:27:29.298545  627191 build_images.go:161] Building image from path: /tmp/build.1066789471.tar
I0408 18:27:29.298614  627191 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0408 18:27:29.309803  627191 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1066789471.tar
I0408 18:27:29.315220  627191 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1066789471.tar: stat -c "%s %y" /var/lib/minikube/build/build.1066789471.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1066789471.tar': No such file or directory
I0408 18:27:29.315254  627191 ssh_runner.go:362] scp /tmp/build.1066789471.tar --> /var/lib/minikube/build/build.1066789471.tar (3072 bytes)
I0408 18:27:29.350602  627191 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1066789471
I0408 18:27:29.367817  627191 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1066789471 -xf /var/lib/minikube/build/build.1066789471.tar
I0408 18:27:29.389239  627191 containerd.go:394] Building image: /var/lib/minikube/build/build.1066789471
I0408 18:27:29.389315  627191 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1066789471 --local dockerfile=/var/lib/minikube/build/build.1066789471 --output type=image,name=localhost/my-image:functional-819351
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.4s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s
#6 [2/3] RUN true
#6 DONE 0.6s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:5ad029404819a20d4b9450572b7ac76afa3f7439a14d060fbb419be93e48b7f0
#8 exporting manifest sha256:5ad029404819a20d4b9450572b7ac76afa3f7439a14d060fbb419be93e48b7f0 0.0s done
#8 exporting config sha256:1fc48b4e76e6e8a8e82dced7836365b0a701a3d06a7b4c3f50a5f3222d3febde 0.0s done
#8 naming to localhost/my-image:functional-819351 done
#8 DONE 0.2s
I0408 18:27:31.440294  627191 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1066789471 --local dockerfile=/var/lib/minikube/build/build.1066789471 --output type=image,name=localhost/my-image:functional-819351: (2.050945602s)
I0408 18:27:31.440383  627191 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1066789471
I0408 18:27:31.458592  627191 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1066789471.tar
I0408 18:27:31.469985  627191 build_images.go:217] Built localhost/my-image:functional-819351 from /tmp/build.1066789471.tar
I0408 18:27:31.470027  627191 build_images.go:133] succeeded building to: functional-819351
I0408 18:27:31.470032  627191 build_images.go:134] failed building to: 
I0408 18:27:31.470099  627191 main.go:141] libmachine: Making call to close driver server
I0408 18:27:31.470121  627191 main.go:141] libmachine: (functional-819351) Calling .Close
I0408 18:27:31.470454  627191 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:27:31.470476  627191 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:27:31.470485  627191 main.go:141] libmachine: Making call to close driver server
I0408 18:27:31.470493  627191 main.go:141] libmachine: (functional-819351) Calling .Close
I0408 18:27:31.470751  627191 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:27:31.470772  627191 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:27:31.470779  627191 main.go:141] libmachine: (functional-819351) DBG | Closing plugin on server side
W0408 18:27:31.472944  627191 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 1c799cc8-c031-4d34-b231-7f3e6c6e58ce
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.86s)
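The build log above shows minikube's flow end to end: tar the local testdata/build context, scp it to /var/lib/minikube/build, unpack it, and shell out to buildctl with the dockerfile.v0 frontend (steps #5-#7 suggest a three-line Dockerfile: FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A minimal Go sketch of that final invocation, using only the flags and paths quoted in the log; it assumes buildkitd is running on the node, which is what the pgrep buildkitd probe at the top of the test checks:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Unpacked build context, taken verbatim from the log above.
	dir := "/var/lib/minikube/build/build.1066789471"
	cmd := exec.Command("sudo", "buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+dir,
		"--local", "dockerfile="+dir,
		"--output", "type=image,name=localhost/my-image:functional-819351",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}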
TestFunctional/parallel/ImageCommands/Setup (0.95s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-819351
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.95s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image load --daemon gcr.io/google-containers/addon-resizer:functional-819351 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-819351 image load --daemon gcr.io/google-containers/addon-resizer:functional-819351 --alsologtostderr: (5.132106407s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.43s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image load --daemon gcr.io/google-containers/addon-resizer:functional-819351 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-819351 image load --daemon gcr.io/google-containers/addon-resizer:functional-819351 --alsologtostderr: (3.071072769s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.32s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-819351
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image load --daemon gcr.io/google-containers/addon-resizer:functional-819351 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-819351 image load --daemon gcr.io/google-containers/addon-resizer:functional-819351 --alsologtostderr: (4.116051745s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.33s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image save gcr.io/google-containers/addon-resizer:functional-819351 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-819351 image save gcr.io/google-containers/addon-resizer:functional-819351 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.111021408s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.11s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image rm gcr.io/google-containers/addon-resizer:functional-819351 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-819351 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.344449209s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.57s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-819351
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-819351 image save --daemon gcr.io/google-containers/addon-resizer:functional-819351 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-819351 image save --daemon gcr.io/google-containers/addon-resizer:functional-819351 --alsologtostderr: (1.130046737s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-819351
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.16s)

TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-819351
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-819351
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-819351
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (307.42s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-436168 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0408 18:28:02.375700  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:30:18.529576  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:30:46.215994  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:31:43.949045  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:31:43.954351  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:31:43.964635  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:31:43.984911  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:31:44.025176  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:31:44.105569  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:31:44.265840  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:31:44.585998  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:31:45.226164  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:31:46.506966  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:31:49.067555  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:31:54.187927  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:32:04.428113  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:32:24.908869  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-436168 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (5m6.693041748s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (307.42s)

TestMultiControlPlane/serial/DeployApp (4.84s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-436168 -- rollout status deployment/busybox: (2.363700931s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- exec busybox-7fdf7869d9-2z59x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- exec busybox-7fdf7869d9-fj4tc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- exec busybox-7fdf7869d9-zglgq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- exec busybox-7fdf7869d9-2z59x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- exec busybox-7fdf7869d9-fj4tc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- exec busybox-7fdf7869d9-zglgq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- exec busybox-7fdf7869d9-2z59x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- exec busybox-7fdf7869d9-fj4tc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- exec busybox-7fdf7869d9-zglgq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.84s)

TestMultiControlPlane/serial/PingHostFromPods (1.44s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- exec busybox-7fdf7869d9-2z59x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- exec busybox-7fdf7869d9-2z59x -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- exec busybox-7fdf7869d9-fj4tc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- exec busybox-7fdf7869d9-fj4tc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- exec busybox-7fdf7869d9-zglgq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-436168 -- exec busybox-7fdf7869d9-zglgq -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.44s)

TestMultiControlPlane/serial/AddWorkerNode (49.03s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-436168 -v=7 --alsologtostderr
E0408 18:33:05.870049  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-436168 -v=7 --alsologtostderr: (48.145366258s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.03s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-436168 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

TestMultiControlPlane/serial/CopyFile (13.9s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp testdata/cp-test.txt ha-436168:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile434962945/001/cp-test_ha-436168.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168:/home/docker/cp-test.txt ha-436168-m02:/home/docker/cp-test_ha-436168_ha-436168-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m02 "sudo cat /home/docker/cp-test_ha-436168_ha-436168-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168:/home/docker/cp-test.txt ha-436168-m03:/home/docker/cp-test_ha-436168_ha-436168-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m03 "sudo cat /home/docker/cp-test_ha-436168_ha-436168-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168:/home/docker/cp-test.txt ha-436168-m04:/home/docker/cp-test_ha-436168_ha-436168-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m04 "sudo cat /home/docker/cp-test_ha-436168_ha-436168-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp testdata/cp-test.txt ha-436168-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile434962945/001/cp-test_ha-436168-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168-m02:/home/docker/cp-test.txt ha-436168:/home/docker/cp-test_ha-436168-m02_ha-436168.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168 "sudo cat /home/docker/cp-test_ha-436168-m02_ha-436168.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168-m02:/home/docker/cp-test.txt ha-436168-m03:/home/docker/cp-test_ha-436168-m02_ha-436168-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m03 "sudo cat /home/docker/cp-test_ha-436168-m02_ha-436168-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168-m02:/home/docker/cp-test.txt ha-436168-m04:/home/docker/cp-test_ha-436168-m02_ha-436168-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m04 "sudo cat /home/docker/cp-test_ha-436168-m02_ha-436168-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp testdata/cp-test.txt ha-436168-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile434962945/001/cp-test_ha-436168-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168-m03:/home/docker/cp-test.txt ha-436168:/home/docker/cp-test_ha-436168-m03_ha-436168.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168 "sudo cat /home/docker/cp-test_ha-436168-m03_ha-436168.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168-m03:/home/docker/cp-test.txt ha-436168-m02:/home/docker/cp-test_ha-436168-m03_ha-436168-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m02 "sudo cat /home/docker/cp-test_ha-436168-m03_ha-436168-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168-m03:/home/docker/cp-test.txt ha-436168-m04:/home/docker/cp-test_ha-436168-m03_ha-436168-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m04 "sudo cat /home/docker/cp-test_ha-436168-m03_ha-436168-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp testdata/cp-test.txt ha-436168-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile434962945/001/cp-test_ha-436168-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168-m04:/home/docker/cp-test.txt ha-436168:/home/docker/cp-test_ha-436168-m04_ha-436168.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168 "sudo cat /home/docker/cp-test_ha-436168-m04_ha-436168.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168-m04:/home/docker/cp-test.txt ha-436168-m02:/home/docker/cp-test_ha-436168-m04_ha-436168-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m02 "sudo cat /home/docker/cp-test_ha-436168-m04_ha-436168-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 cp ha-436168-m04:/home/docker/cp-test.txt ha-436168-m03:/home/docker/cp-test_ha-436168-m04_ha-436168-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 ssh -n ha-436168-m03 "sudo cat /home/docker/cp-test_ha-436168-m04_ha-436168-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.90s)

TestMultiControlPlane/serial/StopSecondaryNode (93.16s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 node stop m02 -v=7 --alsologtostderr
E0408 18:34:27.791332  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:35:18.529198  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-436168 node stop m02 -v=7 --alsologtostderr: (1m32.466828735s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-436168 status -v=7 --alsologtostderr: exit status 7 (687.406518ms)
-- stdout --
	ha-436168
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-436168-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-436168-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-436168-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0408 18:35:22.795876  632049 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:35:22.796021  632049 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:35:22.796033  632049 out.go:304] Setting ErrFile to fd 2...
	I0408 18:35:22.796039  632049 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:35:22.796279  632049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
	I0408 18:35:22.796463  632049 out.go:298] Setting JSON to false
	I0408 18:35:22.796488  632049 mustload.go:65] Loading cluster: ha-436168
	I0408 18:35:22.796616  632049 notify.go:220] Checking for updates...
	I0408 18:35:22.796856  632049 config.go:182] Loaded profile config "ha-436168": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:35:22.796871  632049 status.go:255] checking status of ha-436168 ...
	I0408 18:35:22.797323  632049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:35:22.797376  632049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:35:22.815899  632049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I0408 18:35:22.816394  632049 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:35:22.817108  632049 main.go:141] libmachine: Using API Version  1
	I0408 18:35:22.817133  632049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:35:22.817563  632049 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:35:22.817788  632049 main.go:141] libmachine: (ha-436168) Calling .GetState
	I0408 18:35:22.819625  632049 status.go:330] ha-436168 host status = "Running" (err=<nil>)
	I0408 18:35:22.819651  632049 host.go:66] Checking if "ha-436168" exists ...
	I0408 18:35:22.820081  632049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:35:22.820119  632049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:35:22.834677  632049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40529
	I0408 18:35:22.835086  632049 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:35:22.835627  632049 main.go:141] libmachine: Using API Version  1
	I0408 18:35:22.835650  632049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:35:22.836019  632049 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:35:22.836194  632049 main.go:141] libmachine: (ha-436168) Calling .GetIP
	I0408 18:35:22.839061  632049 main.go:141] libmachine: (ha-436168) DBG | domain ha-436168 has defined MAC address 52:54:00:bd:81:e0 in network mk-ha-436168
	I0408 18:35:22.839648  632049 main.go:141] libmachine: (ha-436168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:81:e0", ip: ""} in network mk-ha-436168: {Iface:virbr1 ExpiryTime:2024-04-08 19:27:48 +0000 UTC Type:0 Mac:52:54:00:bd:81:e0 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-436168 Clientid:01:52:54:00:bd:81:e0}
	I0408 18:35:22.839680  632049 main.go:141] libmachine: (ha-436168) DBG | domain ha-436168 has defined IP address 192.168.39.156 and MAC address 52:54:00:bd:81:e0 in network mk-ha-436168
	I0408 18:35:22.839801  632049 host.go:66] Checking if "ha-436168" exists ...
	I0408 18:35:22.840227  632049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:35:22.840267  632049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:35:22.854531  632049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44317
	I0408 18:35:22.854892  632049 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:35:22.855459  632049 main.go:141] libmachine: Using API Version  1
	I0408 18:35:22.855482  632049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:35:22.855829  632049 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:35:22.856095  632049 main.go:141] libmachine: (ha-436168) Calling .DriverName
	I0408 18:35:22.856295  632049 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 18:35:22.856317  632049 main.go:141] libmachine: (ha-436168) Calling .GetSSHHostname
	I0408 18:35:22.859055  632049 main.go:141] libmachine: (ha-436168) DBG | domain ha-436168 has defined MAC address 52:54:00:bd:81:e0 in network mk-ha-436168
	I0408 18:35:22.859550  632049 main.go:141] libmachine: (ha-436168) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:81:e0", ip: ""} in network mk-ha-436168: {Iface:virbr1 ExpiryTime:2024-04-08 19:27:48 +0000 UTC Type:0 Mac:52:54:00:bd:81:e0 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:ha-436168 Clientid:01:52:54:00:bd:81:e0}
	I0408 18:35:22.859581  632049 main.go:141] libmachine: (ha-436168) DBG | domain ha-436168 has defined IP address 192.168.39.156 and MAC address 52:54:00:bd:81:e0 in network mk-ha-436168
	I0408 18:35:22.859757  632049 main.go:141] libmachine: (ha-436168) Calling .GetSSHPort
	I0408 18:35:22.859939  632049 main.go:141] libmachine: (ha-436168) Calling .GetSSHKeyPath
	I0408 18:35:22.860092  632049 main.go:141] libmachine: (ha-436168) Calling .GetSSHUsername
	I0408 18:35:22.860213  632049 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/ha-436168/id_rsa Username:docker}
	I0408 18:35:22.947568  632049 ssh_runner.go:195] Run: systemctl --version
	I0408 18:35:22.956737  632049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:35:22.977233  632049 kubeconfig.go:125] found "ha-436168" server: "https://192.168.39.254:8443"
	I0408 18:35:22.977277  632049 api_server.go:166] Checking apiserver status ...
	I0408 18:35:22.977314  632049 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 18:35:22.997149  632049 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1115/cgroup
	W0408 18:35:23.010225  632049 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1115/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 18:35:23.010277  632049 ssh_runner.go:195] Run: ls
	I0408 18:35:23.015311  632049 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 18:35:23.026981  632049 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 18:35:23.027018  632049 status.go:422] ha-436168 apiserver status = Running (err=<nil>)
	I0408 18:35:23.027033  632049 status.go:257] ha-436168 status: &{Name:ha-436168 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:35:23.027067  632049 status.go:255] checking status of ha-436168-m02 ...
	I0408 18:35:23.027515  632049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:35:23.027585  632049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:35:23.042333  632049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37821
	I0408 18:35:23.042730  632049 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:35:23.043181  632049 main.go:141] libmachine: Using API Version  1
	I0408 18:35:23.043207  632049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:35:23.043564  632049 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:35:23.043785  632049 main.go:141] libmachine: (ha-436168-m02) Calling .GetState
	I0408 18:35:23.045157  632049 status.go:330] ha-436168-m02 host status = "Stopped" (err=<nil>)
	I0408 18:35:23.045173  632049 status.go:343] host is not running, skipping remaining checks
	I0408 18:35:23.045181  632049 status.go:257] ha-436168-m02 status: &{Name:ha-436168-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:35:23.045200  632049 status.go:255] checking status of ha-436168-m03 ...
	I0408 18:35:23.045476  632049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:35:23.045530  632049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:35:23.059605  632049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39255
	I0408 18:35:23.060061  632049 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:35:23.060595  632049 main.go:141] libmachine: Using API Version  1
	I0408 18:35:23.060627  632049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:35:23.060933  632049 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:35:23.061106  632049 main.go:141] libmachine: (ha-436168-m03) Calling .GetState
	I0408 18:35:23.062732  632049 status.go:330] ha-436168-m03 host status = "Running" (err=<nil>)
	I0408 18:35:23.062759  632049 host.go:66] Checking if "ha-436168-m03" exists ...
	I0408 18:35:23.063025  632049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:35:23.063058  632049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:35:23.078872  632049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36723
	I0408 18:35:23.079409  632049 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:35:23.079870  632049 main.go:141] libmachine: Using API Version  1
	I0408 18:35:23.079891  632049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:35:23.080228  632049 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:35:23.080434  632049 main.go:141] libmachine: (ha-436168-m03) Calling .GetIP
	I0408 18:35:23.083368  632049 main.go:141] libmachine: (ha-436168-m03) DBG | domain ha-436168-m03 has defined MAC address 52:54:00:80:96:47 in network mk-ha-436168
	I0408 18:35:23.083782  632049 main.go:141] libmachine: (ha-436168-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:96:47", ip: ""} in network mk-ha-436168: {Iface:virbr1 ExpiryTime:2024-04-08 19:29:57 +0000 UTC Type:0 Mac:52:54:00:80:96:47 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:ha-436168-m03 Clientid:01:52:54:00:80:96:47}
	I0408 18:35:23.083808  632049 main.go:141] libmachine: (ha-436168-m03) DBG | domain ha-436168-m03 has defined IP address 192.168.39.213 and MAC address 52:54:00:80:96:47 in network mk-ha-436168
	I0408 18:35:23.083961  632049 host.go:66] Checking if "ha-436168-m03" exists ...
	I0408 18:35:23.084243  632049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:35:23.084280  632049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:35:23.099416  632049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33209
	I0408 18:35:23.099825  632049 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:35:23.100342  632049 main.go:141] libmachine: Using API Version  1
	I0408 18:35:23.100364  632049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:35:23.100659  632049 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:35:23.100996  632049 main.go:141] libmachine: (ha-436168-m03) Calling .DriverName
	I0408 18:35:23.101178  632049 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 18:35:23.101198  632049 main.go:141] libmachine: (ha-436168-m03) Calling .GetSSHHostname
	I0408 18:35:23.103741  632049 main.go:141] libmachine: (ha-436168-m03) DBG | domain ha-436168-m03 has defined MAC address 52:54:00:80:96:47 in network mk-ha-436168
	I0408 18:35:23.104171  632049 main.go:141] libmachine: (ha-436168-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:96:47", ip: ""} in network mk-ha-436168: {Iface:virbr1 ExpiryTime:2024-04-08 19:29:57 +0000 UTC Type:0 Mac:52:54:00:80:96:47 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:ha-436168-m03 Clientid:01:52:54:00:80:96:47}
	I0408 18:35:23.104201  632049 main.go:141] libmachine: (ha-436168-m03) DBG | domain ha-436168-m03 has defined IP address 192.168.39.213 and MAC address 52:54:00:80:96:47 in network mk-ha-436168
	I0408 18:35:23.104340  632049 main.go:141] libmachine: (ha-436168-m03) Calling .GetSSHPort
	I0408 18:35:23.104528  632049 main.go:141] libmachine: (ha-436168-m03) Calling .GetSSHKeyPath
	I0408 18:35:23.104684  632049 main.go:141] libmachine: (ha-436168-m03) Calling .GetSSHUsername
	I0408 18:35:23.104807  632049 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/ha-436168-m03/id_rsa Username:docker}
	I0408 18:35:23.198516  632049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:35:23.219374  632049 kubeconfig.go:125] found "ha-436168" server: "https://192.168.39.254:8443"
	I0408 18:35:23.219420  632049 api_server.go:166] Checking apiserver status ...
	I0408 18:35:23.219465  632049 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 18:35:23.235108  632049 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2012/cgroup
	W0408 18:35:23.245391  632049 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2012/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 18:35:23.245442  632049 ssh_runner.go:195] Run: ls
	I0408 18:35:23.250513  632049 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 18:35:23.254991  632049 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 18:35:23.255013  632049 status.go:422] ha-436168-m03 apiserver status = Running (err=<nil>)
	I0408 18:35:23.255021  632049 status.go:257] ha-436168-m03 status: &{Name:ha-436168-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:35:23.255036  632049 status.go:255] checking status of ha-436168-m04 ...
	I0408 18:35:23.255329  632049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:35:23.255385  632049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:35:23.270345  632049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45875
	I0408 18:35:23.270715  632049 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:35:23.271283  632049 main.go:141] libmachine: Using API Version  1
	I0408 18:35:23.271309  632049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:35:23.271696  632049 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:35:23.271901  632049 main.go:141] libmachine: (ha-436168-m04) Calling .GetState
	I0408 18:35:23.273527  632049 status.go:330] ha-436168-m04 host status = "Running" (err=<nil>)
	I0408 18:35:23.273547  632049 host.go:66] Checking if "ha-436168-m04" exists ...
	I0408 18:35:23.273848  632049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:35:23.273888  632049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:35:23.288337  632049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0408 18:35:23.288951  632049 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:35:23.289479  632049 main.go:141] libmachine: Using API Version  1
	I0408 18:35:23.289501  632049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:35:23.289891  632049 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:35:23.290096  632049 main.go:141] libmachine: (ha-436168-m04) Calling .GetIP
	I0408 18:35:23.292811  632049 main.go:141] libmachine: (ha-436168-m04) DBG | domain ha-436168-m04 has defined MAC address 52:54:00:e9:91:d2 in network mk-ha-436168
	I0408 18:35:23.293214  632049 main.go:141] libmachine: (ha-436168-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:91:d2", ip: ""} in network mk-ha-436168: {Iface:virbr1 ExpiryTime:2024-04-08 19:33:03 +0000 UTC Type:0 Mac:52:54:00:e9:91:d2 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-436168-m04 Clientid:01:52:54:00:e9:91:d2}
	I0408 18:35:23.293250  632049 main.go:141] libmachine: (ha-436168-m04) DBG | domain ha-436168-m04 has defined IP address 192.168.39.146 and MAC address 52:54:00:e9:91:d2 in network mk-ha-436168
	I0408 18:35:23.293361  632049 host.go:66] Checking if "ha-436168-m04" exists ...
	I0408 18:35:23.293762  632049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:35:23.293805  632049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:35:23.309072  632049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I0408 18:35:23.309557  632049 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:35:23.310017  632049 main.go:141] libmachine: Using API Version  1
	I0408 18:35:23.310043  632049 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:35:23.310485  632049 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:35:23.310732  632049 main.go:141] libmachine: (ha-436168-m04) Calling .DriverName
	I0408 18:35:23.310996  632049 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 18:35:23.311021  632049 main.go:141] libmachine: (ha-436168-m04) Calling .GetSSHHostname
	I0408 18:35:23.313631  632049 main.go:141] libmachine: (ha-436168-m04) DBG | domain ha-436168-m04 has defined MAC address 52:54:00:e9:91:d2 in network mk-ha-436168
	I0408 18:35:23.314051  632049 main.go:141] libmachine: (ha-436168-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:91:d2", ip: ""} in network mk-ha-436168: {Iface:virbr1 ExpiryTime:2024-04-08 19:33:03 +0000 UTC Type:0 Mac:52:54:00:e9:91:d2 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-436168-m04 Clientid:01:52:54:00:e9:91:d2}
	I0408 18:35:23.314081  632049 main.go:141] libmachine: (ha-436168-m04) DBG | domain ha-436168-m04 has defined IP address 192.168.39.146 and MAC address 52:54:00:e9:91:d2 in network mk-ha-436168
	I0408 18:35:23.314238  632049 main.go:141] libmachine: (ha-436168-m04) Calling .GetSSHPort
	I0408 18:35:23.314424  632049 main.go:141] libmachine: (ha-436168-m04) Calling .GetSSHKeyPath
	I0408 18:35:23.314589  632049 main.go:141] libmachine: (ha-436168-m04) Calling .GetSSHUsername
	I0408 18:35:23.314726  632049 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/ha-436168-m04/id_rsa Username:docker}
	I0408 18:35:23.404820  632049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:35:23.423321  632049 status.go:257] ha-436168-m04 status: &{Name:ha-436168-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (93.16s)
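Note on the status probe in the stderr trace above: after SSH-ing into each running control-plane node, minikube decides "apiserver status = Running" by fetching /healthz on the cluster endpoint (https://192.168.39.254:8443/healthz) and expecting a 200 with body "ok". A rough, self-contained sketch of that check; the timeout and the TLS-verification skip are illustrative assumptions, not minikube's real client configuration:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed; minikube's real timeout may differ
		// The test cluster's apiserver uses a minikube-generated CA, so
		// certificate verification is skipped here purely for illustration.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode == http.StatusOK && string(body) == "ok" {
		fmt.Println("apiserver status = Running")
	} else {
		fmt.Printf("unexpected healthz response %d: %q\n", resp.StatusCode, body)
	}
}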

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)

TestMultiControlPlane/serial/RestartSecondaryNode (45.28s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-436168 node start m02 -v=7 --alsologtostderr: (44.310849013s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (45.28s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.58s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.58s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (458.67s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-436168 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-436168 -v=7 --alsologtostderr
E0408 18:36:43.949123  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:37:11.631735  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:40:18.529111  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-436168 -v=7 --alsologtostderr: (4m38.753619052s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-436168 --wait=true -v=7 --alsologtostderr
E0408 18:41:41.577232  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:41:43.948792  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-436168 --wait=true -v=7 --alsologtostderr: (2m59.781822414s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-436168
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (458.67s)

TestMultiControlPlane/serial/DeleteSecondaryNode (8.2s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-436168 node delete m03 -v=7 --alsologtostderr: (7.41133366s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.20s)
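Note on the Ready check above (ha_test.go:519): the go-template handed to kubectl ranges over each node's status.conditions and prints the status of the condition whose type is "Ready". The same template can be exercised offline with Go's text/template; the node-list JSON below is an invented stand-in for `kubectl get nodes -o json`, while the template string is copied from the log:

package main

import (
	"encoding/json"
	"log"
	"os"
	"text/template"
)

func main() {
	// Template string copied from the test log; kubectl evaluates it with
	// Go's text/template over the JSON form of the object.
	const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Invented stand-in for `kubectl get nodes -o json`: two nodes, both Ready.
	const nodesJSON = `{"items":[
	 {"status":{"conditions":[{"type":"MemoryPressure","status":"False"},{"type":"Ready","status":"True"}]}},
	 {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		log.Fatal(err)
	}
	// Prints one " True" line per Ready node, matching the test's expectation.
	t := template.Must(template.New("ready").Parse(tpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		log.Fatal(err)
	}
}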

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

TestMultiControlPlane/serial/StopCluster (275.81s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 stop -v=7 --alsologtostderr
E0408 18:45:18.529858  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 18:46:43.948617  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:48:06.991954  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-436168 stop -v=7 --alsologtostderr: (4m35.686374972s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-436168 status -v=7 --alsologtostderr: exit status 7 (123.484693ms)
-- stdout --
	ha-436168
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-436168-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-436168-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0408 18:48:32.706878  635950 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:48:32.707021  635950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:48:32.707037  635950 out.go:304] Setting ErrFile to fd 2...
	I0408 18:48:32.707044  635950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:48:32.707255  635950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
	I0408 18:48:32.707447  635950 out.go:298] Setting JSON to false
	I0408 18:48:32.707476  635950 mustload.go:65] Loading cluster: ha-436168
	I0408 18:48:32.707592  635950 notify.go:220] Checking for updates...
	I0408 18:48:32.707936  635950 config.go:182] Loaded profile config "ha-436168": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:48:32.707953  635950 status.go:255] checking status of ha-436168 ...
	I0408 18:48:32.708356  635950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:48:32.708427  635950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:48:32.730000  635950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0408 18:48:32.730498  635950 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:48:32.731052  635950 main.go:141] libmachine: Using API Version  1
	I0408 18:48:32.731073  635950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:48:32.731476  635950 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:48:32.731715  635950 main.go:141] libmachine: (ha-436168) Calling .GetState
	I0408 18:48:32.733318  635950 status.go:330] ha-436168 host status = "Stopped" (err=<nil>)
	I0408 18:48:32.733327  635950 status.go:343] host is not running, skipping remaining checks
	I0408 18:48:32.733334  635950 status.go:257] ha-436168 status: &{Name:ha-436168 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:48:32.733375  635950 status.go:255] checking status of ha-436168-m02 ...
	I0408 18:48:32.733662  635950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:48:32.733702  635950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:48:32.748382  635950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46601
	I0408 18:48:32.748905  635950 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:48:32.749378  635950 main.go:141] libmachine: Using API Version  1
	I0408 18:48:32.749400  635950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:48:32.749733  635950 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:48:32.749912  635950 main.go:141] libmachine: (ha-436168-m02) Calling .GetState
	I0408 18:48:32.751511  635950 status.go:330] ha-436168-m02 host status = "Stopped" (err=<nil>)
	I0408 18:48:32.751536  635950 status.go:343] host is not running, skipping remaining checks
	I0408 18:48:32.751546  635950 status.go:257] ha-436168-m02 status: &{Name:ha-436168-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:48:32.751567  635950 status.go:255] checking status of ha-436168-m04 ...
	I0408 18:48:32.751849  635950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:48:32.751894  635950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:48:32.766550  635950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46781
	I0408 18:48:32.766974  635950 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:48:32.767423  635950 main.go:141] libmachine: Using API Version  1
	I0408 18:48:32.767439  635950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:48:32.767741  635950 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:48:32.767913  635950 main.go:141] libmachine: (ha-436168-m04) Calling .GetState
	I0408 18:48:32.769426  635950 status.go:330] ha-436168-m04 host status = "Stopped" (err=<nil>)
	I0408 18:48:32.769443  635950 status.go:343] host is not running, skipping remaining checks
	I0408 18:48:32.769451  635950 status.go:257] ha-436168-m04 status: &{Name:ha-436168-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (275.81s)

TestMultiControlPlane/serial/RestartCluster (155.89s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-436168 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0408 18:50:18.529996  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-436168 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m35.101631816s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (155.89s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

TestMultiControlPlane/serial/AddSecondaryNode (75.56s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-436168 --control-plane -v=7 --alsologtostderr
E0408 18:51:43.949209  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-436168 --control-plane -v=7 --alsologtostderr: (1m14.657425663s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-436168 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.56s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

TestJSONOutput/start/Command (61.48s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-709289 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-709289 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m1.481474301s)
--- PASS: TestJSONOutput/start/Command (61.48s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-709289 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-709289 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.37s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-709289 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-709289 --output=json --user=testUser: (7.369208957s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-981158 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-981158 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.063258ms)
-- stdout --
	{"specversion":"1.0","id":"410c04b2-18db-46ae-ab89-6d5ed0640327","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-981158] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"23d66998-5658-4e50-9134-2853f503ec9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18585"}}
	{"specversion":"1.0","id":"5f0aea8a-03d8-4f88-b8b5-41cb8a633983","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c2cbf585-f2cf-457a-81e5-ba32c435e2ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18585-610499/kubeconfig"}}
	{"specversion":"1.0","id":"9fdbda36-c5ae-4cda-90e5-abb2d56fde2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-610499/.minikube"}}
	{"specversion":"1.0","id":"8af99180-4763-4b61-84eb-ec7f8f182c24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"427de16d-3c3f-42b2-8e08-d6e9299b276c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e364b8cc-41c7-4c95-910e-0da4dfee503f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-981158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-981158
--- PASS: TestErrorJSONOutput (0.22s)
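Note on the --output=json stream above: every line minikube emits is a CloudEvents-style JSON envelope, and error events such as DRV_UNSUPPORTED_OS carry name, exitcode, and message inside the data payload. A small decoding sketch; the struct shape is inferred from the logged line, not taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// cloudEvent mirrors the fields visible in the logged lines; minikube's
// own event type may differ.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The DRV_UNSUPPORTED_OS error event from the log above.
	line := `{"specversion":"1.0","id":"e364b8cc-41c7-4c95-910e-0da4dfee503f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		log.Fatal(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("minikube failed (%s, exit %s): %s\n",
			ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}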

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (95.44s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-648100 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-648100 --driver=kvm2  --container-runtime=containerd: (46.197435135s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-651268 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-651268 --driver=kvm2  --container-runtime=containerd: (46.528471222s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-648100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-651268
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-651268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-651268
helpers_test.go:175: Cleaning up "first-648100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-648100
--- PASS: TestMinikubeProfile (95.44s)

TestMountStart/serial/StartWithMountFirst (28.22s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-461176 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0408 18:55:18.530405  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-461176 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.224441415s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.22s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-461176 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-461176 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (29.92s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-480663 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-480663 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.917550206s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.92s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-480663 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-480663 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (0.87s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-461176 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.87s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-480663 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-480663 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.41s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-480663
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-480663: (1.408019664s)
--- PASS: TestMountStart/serial/Stop (1.41s)

TestMountStart/serial/RestartStopped (22.69s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-480663
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-480663: (21.690516444s)
--- PASS: TestMountStart/serial/RestartStopped (22.69s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-480663 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-480663 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

TestMultiNode/serial/FreshStart2Nodes (104.33s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-910363 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0408 18:56:43.948375  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 18:58:21.578157  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-910363 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m43.889616663s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (104.33s)

TestMultiNode/serial/DeployApp2Nodes (4.56s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910363 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910363 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-910363 -- rollout status deployment/busybox: (2.78345882s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910363 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910363 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910363 -- exec busybox-7fdf7869d9-jl4j5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910363 -- exec busybox-7fdf7869d9-phjmk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910363 -- exec busybox-7fdf7869d9-jl4j5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910363 -- exec busybox-7fdf7869d9-phjmk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910363 -- exec busybox-7fdf7869d9-jl4j5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910363 -- exec busybox-7fdf7869d9-phjmk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.56s)

TestMultiNode/serial/PingHostFrom2Pods (0.9s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910363 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910363 -- exec busybox-7fdf7869d9-jl4j5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910363 -- exec busybox-7fdf7869d9-jl4j5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910363 -- exec busybox-7fdf7869d9-phjmk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910363 -- exec busybox-7fdf7869d9-phjmk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)
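
Note: the probe each busybox pod runs reduces to the pipeline below. The test
assumes the fifth line of busybox's nslookup output carries the resolved
address (third space-separated field), which here is the libvirt gateway
192.168.39.1. A minimal sketch of the in-pod commands:

    HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)   # extract the resolved IP
    ping -c 1 "$HOST_IP"                                                       # one echo to the KVM host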

TestMultiNode/serial/AddNode (41.55s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-910363 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-910363 -v 3 --alsologtostderr: (40.949376197s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.55s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-910363 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (7.7s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 cp testdata/cp-test.txt multinode-910363:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 cp multinode-910363:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1499324117/001/cp-test_multinode-910363.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 cp multinode-910363:/home/docker/cp-test.txt multinode-910363-m02:/home/docker/cp-test_multinode-910363_multinode-910363-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363-m02 "sudo cat /home/docker/cp-test_multinode-910363_multinode-910363-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 cp multinode-910363:/home/docker/cp-test.txt multinode-910363-m03:/home/docker/cp-test_multinode-910363_multinode-910363-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363-m03 "sudo cat /home/docker/cp-test_multinode-910363_multinode-910363-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 cp testdata/cp-test.txt multinode-910363-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 cp multinode-910363-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1499324117/001/cp-test_multinode-910363-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 cp multinode-910363-m02:/home/docker/cp-test.txt multinode-910363:/home/docker/cp-test_multinode-910363-m02_multinode-910363.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363 "sudo cat /home/docker/cp-test_multinode-910363-m02_multinode-910363.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 cp multinode-910363-m02:/home/docker/cp-test.txt multinode-910363-m03:/home/docker/cp-test_multinode-910363-m02_multinode-910363-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363-m03 "sudo cat /home/docker/cp-test_multinode-910363-m02_multinode-910363-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 cp testdata/cp-test.txt multinode-910363-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 cp multinode-910363-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1499324117/001/cp-test_multinode-910363-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 cp multinode-910363-m03:/home/docker/cp-test.txt multinode-910363:/home/docker/cp-test_multinode-910363-m03_multinode-910363.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363 "sudo cat /home/docker/cp-test_multinode-910363-m03_multinode-910363.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 cp multinode-910363-m03:/home/docker/cp-test.txt multinode-910363-m02:/home/docker/cp-test_multinode-910363-m03_multinode-910363-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 ssh -n multinode-910363-m02 "sudo cat /home/docker/cp-test_multinode-910363-m03_multinode-910363-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.70s)
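
Note: stripped of the `sudo cat` content checks, the matrix above exercises the
three directions `minikube cp` supports. Condensed (names from the log; the
control-plane node goes by the bare profile name, workers by the -m02/-m03
suffix; destination paths shortened):

    minikube -p multinode-910363 cp testdata/cp-test.txt multinode-910363:/home/docker/cp-test.txt     # host -> node
    minikube -p multinode-910363 cp multinode-910363:/home/docker/cp-test.txt /tmp/cp-test.txt         # node -> host
    minikube -p multinode-910363 cp multinode-910363:/home/docker/cp-test.txt multinode-910363-m02:/home/docker/cp-test.txt   # node -> node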

TestMultiNode/serial/StopNode (2.43s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-910363 node stop m03: (1.533210735s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-910363 status: exit status 7 (448.100159ms)

-- stdout --
	multinode-910363
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-910363-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-910363-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-910363 status --alsologtostderr: exit status 7 (449.868163ms)

-- stdout --
	multinode-910363
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-910363-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-910363-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0408 18:59:22.005535  643422 out.go:291] Setting OutFile to fd 1 ...
	I0408 18:59:22.005667  643422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:59:22.005678  643422 out.go:304] Setting ErrFile to fd 2...
	I0408 18:59:22.005683  643422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 18:59:22.006315  643422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
	I0408 18:59:22.006693  643422 out.go:298] Setting JSON to false
	I0408 18:59:22.006750  643422 mustload.go:65] Loading cluster: multinode-910363
	I0408 18:59:22.007171  643422 notify.go:220] Checking for updates...
	I0408 18:59:22.007667  643422 config.go:182] Loaded profile config "multinode-910363": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 18:59:22.007691  643422 status.go:255] checking status of multinode-910363 ...
	I0408 18:59:22.008215  643422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:59:22.008305  643422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:59:22.024592  643422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0408 18:59:22.025054  643422 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:59:22.025636  643422 main.go:141] libmachine: Using API Version  1
	I0408 18:59:22.025660  643422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:59:22.026168  643422 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:59:22.026402  643422 main.go:141] libmachine: (multinode-910363) Calling .GetState
	I0408 18:59:22.028020  643422 status.go:330] multinode-910363 host status = "Running" (err=<nil>)
	I0408 18:59:22.028045  643422 host.go:66] Checking if "multinode-910363" exists ...
	I0408 18:59:22.028356  643422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:59:22.028393  643422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:59:22.043942  643422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37543
	I0408 18:59:22.044423  643422 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:59:22.044940  643422 main.go:141] libmachine: Using API Version  1
	I0408 18:59:22.044967  643422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:59:22.045313  643422 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:59:22.045503  643422 main.go:141] libmachine: (multinode-910363) Calling .GetIP
	I0408 18:59:22.048355  643422 main.go:141] libmachine: (multinode-910363) DBG | domain multinode-910363 has defined MAC address 52:54:00:c7:81:df in network mk-multinode-910363
	I0408 18:59:22.048851  643422 main.go:141] libmachine: (multinode-910363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:81:df", ip: ""} in network mk-multinode-910363: {Iface:virbr1 ExpiryTime:2024-04-08 19:56:56 +0000 UTC Type:0 Mac:52:54:00:c7:81:df Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-910363 Clientid:01:52:54:00:c7:81:df}
	I0408 18:59:22.048891  643422 main.go:141] libmachine: (multinode-910363) DBG | domain multinode-910363 has defined IP address 192.168.39.29 and MAC address 52:54:00:c7:81:df in network mk-multinode-910363
	I0408 18:59:22.049050  643422 host.go:66] Checking if "multinode-910363" exists ...
	I0408 18:59:22.049357  643422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:59:22.049403  643422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:59:22.065640  643422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
	I0408 18:59:22.066059  643422 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:59:22.066601  643422 main.go:141] libmachine: Using API Version  1
	I0408 18:59:22.066621  643422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:59:22.066946  643422 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:59:22.067212  643422 main.go:141] libmachine: (multinode-910363) Calling .DriverName
	I0408 18:59:22.067393  643422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 18:59:22.067421  643422 main.go:141] libmachine: (multinode-910363) Calling .GetSSHHostname
	I0408 18:59:22.070460  643422 main.go:141] libmachine: (multinode-910363) DBG | domain multinode-910363 has defined MAC address 52:54:00:c7:81:df in network mk-multinode-910363
	I0408 18:59:22.070828  643422 main.go:141] libmachine: (multinode-910363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:81:df", ip: ""} in network mk-multinode-910363: {Iface:virbr1 ExpiryTime:2024-04-08 19:56:56 +0000 UTC Type:0 Mac:52:54:00:c7:81:df Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-910363 Clientid:01:52:54:00:c7:81:df}
	I0408 18:59:22.070864  643422 main.go:141] libmachine: (multinode-910363) DBG | domain multinode-910363 has defined IP address 192.168.39.29 and MAC address 52:54:00:c7:81:df in network mk-multinode-910363
	I0408 18:59:22.071032  643422 main.go:141] libmachine: (multinode-910363) Calling .GetSSHPort
	I0408 18:59:22.071212  643422 main.go:141] libmachine: (multinode-910363) Calling .GetSSHKeyPath
	I0408 18:59:22.071386  643422 main.go:141] libmachine: (multinode-910363) Calling .GetSSHUsername
	I0408 18:59:22.071568  643422 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/multinode-910363/id_rsa Username:docker}
	I0408 18:59:22.155721  643422 ssh_runner.go:195] Run: systemctl --version
	I0408 18:59:22.163033  643422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:59:22.180221  643422 kubeconfig.go:125] found "multinode-910363" server: "https://192.168.39.29:8443"
	I0408 18:59:22.180267  643422 api_server.go:166] Checking apiserver status ...
	I0408 18:59:22.180308  643422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 18:59:22.195673  643422 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0408 18:59:22.207587  643422 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 18:59:22.207658  643422 ssh_runner.go:195] Run: ls
	I0408 18:59:22.213472  643422 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I0408 18:59:22.217685  643422 api_server.go:279] https://192.168.39.29:8443/healthz returned 200:
	ok
	I0408 18:59:22.217713  643422 status.go:422] multinode-910363 apiserver status = Running (err=<nil>)
	I0408 18:59:22.217727  643422 status.go:257] multinode-910363 status: &{Name:multinode-910363 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:59:22.217748  643422 status.go:255] checking status of multinode-910363-m02 ...
	I0408 18:59:22.218039  643422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:59:22.218076  643422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:59:22.233326  643422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38863
	I0408 18:59:22.233883  643422 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:59:22.234382  643422 main.go:141] libmachine: Using API Version  1
	I0408 18:59:22.234404  643422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:59:22.234807  643422 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:59:22.235038  643422 main.go:141] libmachine: (multinode-910363-m02) Calling .GetState
	I0408 18:59:22.236560  643422 status.go:330] multinode-910363-m02 host status = "Running" (err=<nil>)
	I0408 18:59:22.236592  643422 host.go:66] Checking if "multinode-910363-m02" exists ...
	I0408 18:59:22.236865  643422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:59:22.236903  643422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:59:22.253314  643422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41577
	I0408 18:59:22.253773  643422 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:59:22.254236  643422 main.go:141] libmachine: Using API Version  1
	I0408 18:59:22.254252  643422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:59:22.254551  643422 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:59:22.254730  643422 main.go:141] libmachine: (multinode-910363-m02) Calling .GetIP
	I0408 18:59:22.257375  643422 main.go:141] libmachine: (multinode-910363-m02) DBG | domain multinode-910363-m02 has defined MAC address 52:54:00:5d:3a:89 in network mk-multinode-910363
	I0408 18:59:22.257856  643422 main.go:141] libmachine: (multinode-910363-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:3a:89", ip: ""} in network mk-multinode-910363: {Iface:virbr1 ExpiryTime:2024-04-08 19:58:00 +0000 UTC Type:0 Mac:52:54:00:5d:3a:89 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:multinode-910363-m02 Clientid:01:52:54:00:5d:3a:89}
	I0408 18:59:22.257887  643422 main.go:141] libmachine: (multinode-910363-m02) DBG | domain multinode-910363-m02 has defined IP address 192.168.39.4 and MAC address 52:54:00:5d:3a:89 in network mk-multinode-910363
	I0408 18:59:22.257992  643422 host.go:66] Checking if "multinode-910363-m02" exists ...
	I0408 18:59:22.258367  643422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:59:22.258412  643422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:59:22.273630  643422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44717
	I0408 18:59:22.274152  643422 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:59:22.274663  643422 main.go:141] libmachine: Using API Version  1
	I0408 18:59:22.274689  643422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:59:22.275022  643422 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:59:22.275220  643422 main.go:141] libmachine: (multinode-910363-m02) Calling .DriverName
	I0408 18:59:22.275447  643422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 18:59:22.275468  643422 main.go:141] libmachine: (multinode-910363-m02) Calling .GetSSHHostname
	I0408 18:59:22.277959  643422 main.go:141] libmachine: (multinode-910363-m02) DBG | domain multinode-910363-m02 has defined MAC address 52:54:00:5d:3a:89 in network mk-multinode-910363
	I0408 18:59:22.278328  643422 main.go:141] libmachine: (multinode-910363-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:3a:89", ip: ""} in network mk-multinode-910363: {Iface:virbr1 ExpiryTime:2024-04-08 19:58:00 +0000 UTC Type:0 Mac:52:54:00:5d:3a:89 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:multinode-910363-m02 Clientid:01:52:54:00:5d:3a:89}
	I0408 18:59:22.278342  643422 main.go:141] libmachine: (multinode-910363-m02) DBG | domain multinode-910363-m02 has defined IP address 192.168.39.4 and MAC address 52:54:00:5d:3a:89 in network mk-multinode-910363
	I0408 18:59:22.278481  643422 main.go:141] libmachine: (multinode-910363-m02) Calling .GetSSHPort
	I0408 18:59:22.278655  643422 main.go:141] libmachine: (multinode-910363-m02) Calling .GetSSHKeyPath
	I0408 18:59:22.278815  643422 main.go:141] libmachine: (multinode-910363-m02) Calling .GetSSHUsername
	I0408 18:59:22.278931  643422 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18585-610499/.minikube/machines/multinode-910363-m02/id_rsa Username:docker}
	I0408 18:59:22.359981  643422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:59:22.377895  643422 status.go:257] multinode-910363-m02 status: &{Name:multinode-910363-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:59:22.377960  643422 status.go:255] checking status of multinode-910363-m03 ...
	I0408 18:59:22.378312  643422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 18:59:22.378355  643422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:59:22.394684  643422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0408 18:59:22.395236  643422 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:59:22.395789  643422 main.go:141] libmachine: Using API Version  1
	I0408 18:59:22.395816  643422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:59:22.396217  643422 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:59:22.396424  643422 main.go:141] libmachine: (multinode-910363-m03) Calling .GetState
	I0408 18:59:22.398043  643422 status.go:330] multinode-910363-m03 host status = "Stopped" (err=<nil>)
	I0408 18:59:22.398061  643422 status.go:343] host is not running, skipping remaining checks
	I0408 18:59:22.398069  643422 status.go:257] multinode-910363-m03 status: &{Name:multinode-910363-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)
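
Note: the W0408 line in the stderr above appears to be tolerated noise: the
status probe first tries the cgroup v1 freezer path for the apiserver process,
fails on this guest, and then falls back to hitting the healthz endpoint
directly, which is what produced the Running verdict. Roughly the same check
by hand (address from the log; -k because the apiserver cert is self-signed):

    curl -k https://192.168.39.29:8443/healthz    # expect: ok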

TestMultiNode/serial/StartAfterStop (26.89s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-910363 node start m03 -v=7 --alsologtostderr: (26.238703312s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (26.89s)

TestMultiNode/serial/RestartKeepsNodes (296.07s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-910363
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-910363
E0408 19:00:18.530372  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 19:01:43.950532  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-910363: (3m5.524285002s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-910363 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-910363 --wait=true -v=8 --alsologtostderr: (1m50.424381533s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-910363
--- PASS: TestMultiNode/serial/RestartKeepsNodes (296.07s)
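
Note: the assertion here is simply that the node inventory survives a full
stop/start cycle. Condensed (flags from the log):

    minikube node list -p multinode-910363            # snapshot the nodes
    minikube stop -p multinode-910363                 # ~3m5s: stops all three
    minikube start -p multinode-910363 --wait=true    # ~1m50s: brings all three back
    minikube node list -p multinode-910363            # must match the snapshot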

TestMultiNode/serial/DeleteNode (2.24s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 node delete m03
E0408 19:04:46.992528  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-910363 node delete m03: (1.680997639s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.24s)
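
Note: the final kubectl call is the readiness gate several tests in this file
reuse: the go-template walks every node's conditions and prints the status of
each Ready condition, one per line, so the harness can check that only True
remains after the delete. Template copied from the log (outer quoting
simplified):

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'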

TestMultiNode/serial/StopMultiNode (184.22s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 stop
E0408 19:05:18.529802  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 19:06:43.950498  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-910363 stop: (3m4.030188744s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-910363 status: exit status 7 (92.887ms)

-- stdout --
	multinode-910363
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-910363-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-910363 status --alsologtostderr: exit status 7 (93.306099ms)

-- stdout --
	multinode-910363
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-910363-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0408 19:07:51.783497  646019 out.go:291] Setting OutFile to fd 1 ...
	I0408 19:07:51.783680  646019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:07:51.783690  646019 out.go:304] Setting ErrFile to fd 2...
	I0408 19:07:51.783697  646019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:07:51.783896  646019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
	I0408 19:07:51.784086  646019 out.go:298] Setting JSON to false
	I0408 19:07:51.784116  646019 mustload.go:65] Loading cluster: multinode-910363
	I0408 19:07:51.784234  646019 notify.go:220] Checking for updates...
	I0408 19:07:51.784507  646019 config.go:182] Loaded profile config "multinode-910363": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 19:07:51.784531  646019 status.go:255] checking status of multinode-910363 ...
	I0408 19:07:51.784921  646019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 19:07:51.785000  646019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:07:51.803086  646019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0408 19:07:51.803551  646019 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:07:51.804168  646019 main.go:141] libmachine: Using API Version  1
	I0408 19:07:51.804209  646019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:07:51.804528  646019 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:07:51.804744  646019 main.go:141] libmachine: (multinode-910363) Calling .GetState
	I0408 19:07:51.806166  646019 status.go:330] multinode-910363 host status = "Stopped" (err=<nil>)
	I0408 19:07:51.806179  646019 status.go:343] host is not running, skipping remaining checks
	I0408 19:07:51.806186  646019 status.go:257] multinode-910363 status: &{Name:multinode-910363 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 19:07:51.806208  646019 status.go:255] checking status of multinode-910363-m02 ...
	I0408 19:07:51.806490  646019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0408 19:07:51.806539  646019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:07:51.820679  646019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I0408 19:07:51.821078  646019 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:07:51.821523  646019 main.go:141] libmachine: Using API Version  1
	I0408 19:07:51.821542  646019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:07:51.821846  646019 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:07:51.822033  646019 main.go:141] libmachine: (multinode-910363-m02) Calling .GetState
	I0408 19:07:51.823517  646019 status.go:330] multinode-910363-m02 host status = "Stopped" (err=<nil>)
	I0408 19:07:51.823552  646019 status.go:343] host is not running, skipping remaining checks
	I0408 19:07:51.823559  646019 status.go:257] multinode-910363-m02 status: &{Name:multinode-910363-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.22s)

TestMultiNode/serial/RestartMultiNode (81s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-910363 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-910363 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m20.430553096s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910363 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (81.00s)

TestMultiNode/serial/ValidateNameConflict (47.42s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-910363
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-910363-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-910363-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (77.504436ms)

-- stdout --
	* [multinode-910363-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18585-610499/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-610499/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-910363-m02' is duplicated with machine name 'multinode-910363-m02' in profile 'multinode-910363'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-910363-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-910363-m03 --driver=kvm2  --container-runtime=containerd: (46.048239193s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-910363
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-910363: exit status 80 (236.908211ms)

-- stdout --
	* Adding node m03 to cluster multinode-910363 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-910363-m03 already exists in multinode-910363-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-910363-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.42s)
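
Note: both refusals above are the point of the test. minikube names additional
nodes <profile>-m0N, so a standalone profile that borrows that shape collides
in both directions. Condensed:

    minikube start -p multinode-910363-m02    # exit 14: duplicates the existing worker's machine name
    minikube start -p multinode-910363-m03    # succeeds while no third node exists...
    minikube node add -p multinode-910363     # exit 80: the next node would be multinode-910363-m03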

TestPreload (229.98s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-989995 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0408 19:10:18.529838  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-989995 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m25.670207232s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-989995 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-989995
E0408 19:11:43.948477  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-989995: (1m32.44426061s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-989995 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-989995 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (49.739272638s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-989995 image list
helpers_test.go:175: Cleaning up "test-preload-989995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-989995
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-989995: (1.075566572s)
--- PASS: TestPreload (229.98s)
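
Note: the shape of the test, condensed from the commands above: build a
v1.24.4 cluster with preload disabled, pull an extra image, stop, restart with
preload back on, then list images, presumably to confirm the pulled busybox
survived the preloaded restart.

    minikube start -p test-preload-989995 --preload=false --kubernetes-version=v1.24.4 --driver=kvm2 --container-runtime=containerd
    minikube -p test-preload-989995 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-989995
    minikube start -p test-preload-989995 --driver=kvm2 --container-runtime=containerd
    minikube -p test-preload-989995 image list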

TestScheduledStopUnix (118.99s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-718259 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-718259 --memory=2048 --driver=kvm2  --container-runtime=containerd: (47.214835137s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-718259 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-718259 -n scheduled-stop-718259
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-718259 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-718259 --cancel-scheduled
E0408 19:15:01.578525  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-718259 -n scheduled-stop-718259
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-718259
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-718259 --schedule 15s
E0408 19:15:18.529380  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-718259
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-718259: exit status 7 (77.08284ms)

-- stdout --
	scheduled-stop-718259
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-718259 -n scheduled-stop-718259
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-718259 -n scheduled-stop-718259: exit status 7 (75.976837ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-718259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-718259
--- PASS: TestScheduledStopUnix (118.99s)
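
Note: the scheduled-stop surface exercised above, condensed (every flag
appears in the log):

    minikube stop -p scheduled-stop-718259 --schedule 5m                   # arm a stop five minutes out
    minikube status -p scheduled-stop-718259 --format='{{.TimeToStop}}'    # inspect the countdown
    minikube stop -p scheduled-stop-718259 --cancel-scheduled              # disarm without stopping
    minikube stop -p scheduled-stop-718259 --schedule 15s                  # re-arm; status reports Stopped soon after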

TestRunningBinaryUpgrade (202.84s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2032147761 start -p running-upgrade-119620 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2032147761 start -p running-upgrade-119620 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m52.207398458s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-119620 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-119620 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m28.894434943s)
helpers_test.go:175: Cleaning up "running-upgrade-119620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-119620
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-119620: (1.187327803s)
--- PASS: TestRunningBinaryUpgrade (202.84s)
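
Note: the whole test is two starts against one profile: an archived v1.26.0
release creates the cluster, then the freshly built binary takes it over in
place. Condensed (binary paths from the log; note the older --vm-driver
spelling the legacy release uses, also as in the log):

    /tmp/minikube-v1.26.0.2032147761 start -p running-upgrade-119620 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 start -p running-upgrade-119620 --memory=2200 --driver=kvm2 --container-runtime=containerd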

TestKubernetesUpgrade (179.18s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-770864 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-770864 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m4.427591316s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-770864
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-770864: (2.385191483s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-770864 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-770864 status --format={{.Host}}: exit status 7 (126.590538ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-770864 --memory=2200 --kubernetes-version=v1.30.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-770864 --memory=2200 --kubernetes-version=v1.30.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m24.103639235s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-770864 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-770864 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-770864 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (106.993412ms)

-- stdout --
	* [kubernetes-upgrade-770864] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18585-610499/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-610499/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-770864
	    minikube start -p kubernetes-upgrade-770864 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7708642 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-770864 --kubernetes-version=v1.30.0-rc.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-770864 --memory=2200 --kubernetes-version=v1.30.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-770864 --memory=2200 --kubernetes-version=v1.30.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (26.652746792s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-770864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-770864
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-770864: (1.314632643s)
--- PASS: TestKubernetesUpgrade (179.18s)
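
Note: the version walk, condensed: the ten-minor-version upgrade is accepted,
the downgrade is refused with exit 106 plus the three-option suggestion shown
above, and a same-version restart still works afterwards ("..." marks flags
elided from the log):

    minikube start -p kubernetes-upgrade-770864 --kubernetes-version=v1.20.0 ...
    minikube stop -p kubernetes-upgrade-770864
    minikube start -p kubernetes-upgrade-770864 --kubernetes-version=v1.30.0-rc.1 ...   # upgrade: ok
    minikube start -p kubernetes-upgrade-770864 --kubernetes-version=v1.20.0 ...        # downgrade: exit 106
    minikube start -p kubernetes-upgrade-770864 --kubernetes-version=v1.30.0-rc.1 ...   # restart: ok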

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-993247 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-993247 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (98.73093ms)

-- stdout --
	* [NoKubernetes-993247] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18585-610499/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-610499/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
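
Note: the flag conflict is the entire test, and the error text above also
names the escape hatch for when the version comes from global config rather
than the command line:

    minikube config unset kubernetes-version      # drop a globally pinned version
    minikube start -p NoKubernetes-993247 --no-kubernetes --driver=kvm2 --container-runtime=containerd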

TestNoKubernetes/serial/StartWithK8s (97.42s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-993247 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-993247 --driver=kvm2  --container-runtime=containerd: (1m37.144965664s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-993247 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (97.42s)

TestNetworkPlugins/group/false (3.48s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-827074 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-827074 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (126.981304ms)

-- stdout --
	* [false-827074] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18585-610499/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-610499/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0408 19:15:54.396984  650362 out.go:291] Setting OutFile to fd 1 ...
	I0408 19:15:54.397328  650362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:15:54.397343  650362 out.go:304] Setting ErrFile to fd 2...
	I0408 19:15:54.397349  650362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 19:15:54.397657  650362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18585-610499/.minikube/bin
	I0408 19:15:54.398417  650362 out.go:298] Setting JSON to false
	I0408 19:15:54.399735  650362 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10705,"bootTime":1712593049,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 19:15:54.399821  650362 start.go:139] virtualization: kvm guest
	I0408 19:15:54.402186  650362 out.go:177] * [false-827074] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 19:15:54.403554  650362 out.go:177]   - MINIKUBE_LOCATION=18585
	I0408 19:15:54.405011  650362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 19:15:54.403595  650362 notify.go:220] Checking for updates...
	I0408 19:15:54.407916  650362 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18585-610499/kubeconfig
	I0408 19:15:54.409277  650362 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18585-610499/.minikube
	I0408 19:15:54.410566  650362 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 19:15:54.411863  650362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 19:15:54.413582  650362 config.go:182] Loaded profile config "NoKubernetes-993247": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 19:15:54.413706  650362 config.go:182] Loaded profile config "force-systemd-env-038234": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 19:15:54.413807  650362 config.go:182] Loaded profile config "offline-containerd-979064": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0408 19:15:54.413903  650362 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 19:15:54.451591  650362 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 19:15:54.452875  650362 start.go:297] selected driver: kvm2
	I0408 19:15:54.452894  650362 start.go:901] validating driver "kvm2" against <nil>
	I0408 19:15:54.452905  650362 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 19:15:54.454849  650362 out.go:177] 
	W0408 19:15:54.456062  650362 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0408 19:15:54.457308  650362 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-827074 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-827074

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-827074

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-827074

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-827074

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-827074

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-827074

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-827074

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-827074

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-827074

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-827074

>>> host: /etc/nsswitch.conf:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: /etc/hosts:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: /etc/resolv.conf:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-827074

>>> host: crictl pods:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: crictl containers:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> k8s: describe netcat deployment:
error: context "false-827074" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-827074" does not exist

>>> k8s: netcat logs:
error: context "false-827074" does not exist

>>> k8s: describe coredns deployment:
error: context "false-827074" does not exist

>>> k8s: describe coredns pods:
error: context "false-827074" does not exist

>>> k8s: coredns logs:
error: context "false-827074" does not exist

>>> k8s: describe api server pod(s):
error: context "false-827074" does not exist

>>> k8s: api server logs:
error: context "false-827074" does not exist

>>> host: /etc/cni:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: ip a s:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: ip r s:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: iptables-save:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: iptables table nat:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> k8s: describe kube-proxy daemon set:
error: context "false-827074" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-827074" does not exist

>>> k8s: kube-proxy logs:
error: context "false-827074" does not exist

>>> host: kubelet daemon status:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: kubelet daemon config:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> k8s: kubelet logs:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-827074

>>> host: docker daemon status:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: docker daemon config:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: /etc/docker/daemon.json:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: docker system info:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: cri-docker daemon status:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: cri-docker daemon config:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: cri-dockerd version:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: containerd daemon status:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: containerd daemon config:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: /etc/containerd/config.toml:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: containerd config dump:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: crio daemon status:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: crio daemon config:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: /etc/crio:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"

>>> host: crio config:
* Profile "false-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827074"
----------------------- debugLogs end: false-827074 [took: 3.199980956s] --------------------------------
helpers_test.go:175: Cleaning up "false-827074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-827074
--- PASS: TestNetworkPlugins/group/false (3.48s)
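Note on the exit status above: TestNetworkPlugins/group/false passes --cni=false on purpose, and the MK_USAGE rejection is the behavior under test, since the containerd runtime cannot wire up pod networking without a CNI plugin; exit status 14 is exactly what the assertion wants to see. As a sketch of an invocation that would clear the validation, the same flags work once an explicit CNI is chosen (bridge is one of the values exercised later in this report):

    out/minikube-linux-amd64 start -p false-827074 --memory=2048 --alsologtostderr --cni=bridge --driver=kvm2 --container-runtime=containerd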

TestNoKubernetes/serial/StartWithStopK8s (75.33s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-993247 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-993247 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m14.062134984s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-993247 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-993247 status -o json: exit status 2 (250.80101ms)

-- stdout --
	{"Name":"NoKubernetes-993247","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-993247
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-993247: (1.014135405s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (75.33s)
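The exit status 2 from the status call is the expected signal here rather than a failure: with --no-kubernetes the host comes up but no control plane is started, and the JSON above reflects that ("Host":"Running" with "Kubelet" and "APIServer" both "Stopped"). A sketch for pulling a single field out of that output (jq is an assumption of the sketch, not part of the test harness):

    out/minikube-linux-amd64 -p NoKubernetes-993247 status -o json | jq -r '.Kubelet'
    # Stopped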

TestNoKubernetes/serial/Start (36.97s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-993247 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-993247 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (36.970376981s)
--- PASS: TestNoKubernetes/serial/Start (36.97s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-993247 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-993247 "sudo systemctl is-active --quiet service kubelet": exit status 1 (240.743511ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
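The "Process exited with status 3" in stderr is the pass condition, not a problem: systemctl is-active exits 0 only for an active unit and non-zero otherwise (3 conventionally indicating an inactive unit), so a non-zero exit is the proof the test wants that kubelet is not running. The same probe can be run by hand against the profile, e.g.:

    out/minikube-linux-amd64 ssh -p NoKubernetes-993247 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not active"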

TestNoKubernetes/serial/ProfileList (1.61s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.61s)

TestNoKubernetes/serial/Stop (1.63s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-993247
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-993247: (1.629316001s)
--- PASS: TestNoKubernetes/serial/Stop (1.63s)

TestNoKubernetes/serial/StartNoArgs (23.79s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-993247 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-993247 --driver=kvm2  --container-runtime=containerd: (23.785035795s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.79s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-993247 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-993247 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.995179ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestStoppedBinaryUpgrade/Setup (0.57s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.57s)

TestStoppedBinaryUpgrade/Upgrade (144.86s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4220803715 start -p stopped-upgrade-045885 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0408 19:20:18.529319  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4220803715 start -p stopped-upgrade-045885 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m21.640359959s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4220803715 -p stopped-upgrade-045885 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4220803715 -p stopped-upgrade-045885 stop: (2.15924841s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-045885 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-045885 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m1.056825041s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (144.86s)
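The Upgrade subtest condenses to three steps, all visible in the log above: start a cluster with a previously released binary, stop it, then start the same profile with the binary under test and require a clean recovery. In sequence (the /tmp name is the test's randomized copy of the downloaded v1.26.0 release):

    /tmp/minikube-v1.26.0.4220803715 start -p stopped-upgrade-045885 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
    /tmp/minikube-v1.26.0.4220803715 -p stopped-upgrade-045885 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-045885 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd

The interleaved cert_rotation error refers to the client certificate of the earlier addons-647801 profile and did not affect this test's result.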

TestPause/serial/Start (82.57s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-596964 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-596964 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m22.569479727s)
--- PASS: TestPause/serial/Start (82.57s)

TestNetworkPlugins/group/auto/Start (106.24s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-827074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-827074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m46.237876122s)
--- PASS: TestNetworkPlugins/group/auto/Start (106.24s)

TestNetworkPlugins/group/kindnet/Start (127.86s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-827074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E0408 19:21:26.993537  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 19:21:43.949106  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-827074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (2m7.857685764s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (127.86s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-045885
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-045885: (1.076838449s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

TestNetworkPlugins/group/calico/Start (130.47s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-827074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-827074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (2m10.465471258s)
--- PASS: TestNetworkPlugins/group/calico/Start (130.47s)

TestPause/serial/SecondStartNoReconfiguration (85.88s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-596964 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-596964 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m25.856225511s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (85.88s)

TestNetworkPlugins/group/auto/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-827074 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

TestNetworkPlugins/group/auto/NetCatPod (11.3s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-827074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pnhbh" [a9c3023f-c8c0-4fcc-813d-9b16bd1f8bc5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pnhbh" [a9c3023f-c8c0-4fcc-813d-9b16bd1f8bc5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003813234s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)
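Each NetCatPod step applies the same testdata/netcat-deployment.yaml to the cluster under test and waits for an app=netcat pod to become Ready; the Pending/ContainersNotReady line above is the normal intermediate state (typically just the image pull for the dnsutils container) before the pod flips to Running. A sketch of the equivalent manual check:

    kubectl --context auto-827074 get pods -n default -l app=netcat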

TestNetworkPlugins/group/auto/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-827074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-827074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-827074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
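The Localhost and HairPin probes differ only in their target: Localhost has the netcat pod dial its own port via localhost, while HairPin dials the pod's own service name (netcat), forcing the connection out through the service VIP and back to the originating pod. A CNI or kube-proxy configuration that mishandles hairpin NAT would pass the first nc and fail the second.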

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6kvrt" [1924823f-3a95-440f-a247-e2e24a69d0a6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006850982s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (96.63s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-827074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-827074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m36.625378933s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (96.63s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-827074 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.3s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-827074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-99qsv" [9cfe235a-4a47-4db2-98fe-8324662f1c5d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-99qsv" [9cfe235a-4a47-4db2-98fe-8324662f1c5d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005149109s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.30s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-827074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-827074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-827074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestPause/serial/Pause (0.93s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-596964 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

TestPause/serial/VerifyStatus (0.82s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-596964 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-596964 --output=json --layout=cluster: exit status 2 (819.406477ms)

-- stdout --
	{"Name":"pause-596964","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-596964","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.82s)
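minikube encodes cluster state as HTTP-style status codes in this JSON: 200/OK for healthy components, 405/Stopped for the kubelet, and 418/Paused for the apiserver of a paused cluster, so the overall exit status 2 is the expected signal for a paused profile rather than an error. A sketch for extracting the per-component view (jq is an assumption, not part of the harness):

    out/minikube-linux-amd64 status -p pause-596964 --output=json --layout=cluster | jq '.Nodes[0].Components'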

TestNetworkPlugins/group/enable-default-cni/Start (70.66s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-827074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-827074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m10.655332592s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.66s)

TestPause/serial/Unpause (0.91s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-596964 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

TestPause/serial/PauseAgain (1.16s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-596964 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-596964 --alsologtostderr -v=5: (1.160871661s)
--- PASS: TestPause/serial/PauseAgain (1.16s)

TestPause/serial/DeletePaused (1.1s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-596964 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-596964 --alsologtostderr -v=5: (1.101843767s)
--- PASS: TestPause/serial/DeletePaused (1.10s)

TestPause/serial/VerifyDeletedResources (0.34s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.34s)

TestNetworkPlugins/group/flannel/Start (116.29s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-827074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-827074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m56.288524771s)
--- PASS: TestNetworkPlugins/group/flannel/Start (116.29s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ksgm5" [b1d25598-fa27-4c8a-b4e4-632a4ccc577b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005368896s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-827074 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

TestNetworkPlugins/group/calico/NetCatPod (11.26s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-827074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jp99f" [5b590fb4-92e0-4513-8c49-6ed12304a5a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jp99f" [5b590fb4-92e0-4513-8c49-6ed12304a5a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.009267214s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.26s)

TestNetworkPlugins/group/calico/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-827074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-827074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (1.58s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-827074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:264: (dbg) Done: kubectl --context calico-827074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": (1.58274001s)
--- PASS: TestNetworkPlugins/group/calico/HairPin (1.58s)

TestNetworkPlugins/group/bridge/Start (77.86s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-827074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-827074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m17.856990646s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.86s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-827074 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.3s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-827074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-d84l6" [a9df0ae3-191f-497c-9267-b584e97cfcb9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0408 19:25:18.529145  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-d84l6" [a9df0ae3-191f-497c-9267-b584e97cfcb9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005096062s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.30s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-827074 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-827074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b595t" [8904e979-88a8-4089-b1a4-3e88c83f00e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-b595t" [8904e979-88a8-4089-b1a4-3e88c83f00e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.006672082s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-827074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-827074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-827074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-827074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-827074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-827074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestStartStop/group/old-k8s-version/serial/FirstStart (183.79s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-763627 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-763627 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m3.785067062s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (183.79s)

TestStartStop/group/no-preload/serial/FirstStart (132.58s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-113058 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-113058 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1: (2m12.578382542s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (132.58s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jw9vc" [730c5efb-0a75-485d-8497-3878e4859ff2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005135807s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-827074 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-827074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6wmxm" [666a0297-dfad-4b9b-9add-9119170a55aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6wmxm" [666a0297-dfad-4b9b-9add-9119170a55aa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005435381s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.28s)
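Note: NetCatPod (re)creates the probe workload with kubectl replace --force, which deletes and recreates the deployment so each CNI variant starts from a clean object, then polls until a pod labeled app=netcat is Running and Ready. A sketch of the same deploy-and-wait, assuming testdata/netcat-deployment.yaml from the minikube repo is on hand; the explicit kubectl wait is a hypothetical stand-in for the harness's own readiness poll:

    kubectl --context flannel-827074 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context flannel-827074 wait --for=condition=Ready pod -l app=netcat --timeout=15m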
TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-827074 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

TestNetworkPlugins/group/bridge/NetCatPod (11.80s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-827074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context bridge-827074 replace --force -f testdata/netcat-deployment.yaml: (1.628186426s)
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4f8m5" [7eb070b9-0b43-432d-9efa-87679961c0e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4f8m5" [7eb070b9-0b43-432d-9efa-87679961c0e6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005400162s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.80s)

TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-827074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-827074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-827074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-827074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-827074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.20s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-827074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

TestStartStop/group/embed-certs/serial/FirstStart (76.49s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-743523 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-743523 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m16.490237506s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (76.49s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-652174 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-652174 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m29.823103444s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.82s)

TestStartStop/group/embed-certs/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-743523 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4b9b192a-0e58-47ff-87d6-966f8c1136c7] Pending
helpers_test.go:344: "busybox" [4b9b192a-0e58-47ff-87d6-966f8c1136c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4b9b192a-0e58-47ff-87d6-966f8c1136c7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005045976s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-743523 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.37s)
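Note: DeployApp is a smoke test of the freshly started cluster: it creates a busybox pod, waits for it to run, then execs "ulimit -n", presumably to confirm both that the exec path works and that containerd hands the container a sane open-file limit. A sketch reusing the names from this run; the explicit kubectl wait is a hypothetical stand-in for the harness's poll:

    kubectl --context embed-certs-743523 create -f testdata/busybox.yaml
    kubectl --context embed-certs-743523 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context embed-certs-743523 exec busybox -- /bin/sh -c "ulimit -n"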
TestStartStop/group/no-preload/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-113058 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [88de4bd9-762e-496e-901f-22348738833a] Pending
helpers_test.go:344: "busybox" [88de4bd9-762e-496e-901f-22348738833a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [88de4bd9-762e-496e-901f-22348738833a] Running
E0408 19:28:07.776835  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
E0408 19:28:07.782150  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
E0408 19:28:07.792493  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
E0408 19:28:07.812936  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
E0408 19:28:07.853399  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
E0408 19:28:07.933835  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
E0408 19:28:08.094307  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
E0408 19:28:08.415428  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
E0408 19:28:09.056344  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00562306s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-113058 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.39s)
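Note: the interleaved "E0408 ... cert_rotation.go:168" lines here and below appear to come from client-go's certificate-rotation watcher: it still references kubeconfig entries for profiles (auto-827074, kindnet-827074, calico-827074, ...) whose .minikube directories earlier tests already deleted, so the key reload fails with "no such file or directory". They are background noise, not failures of the tests they interrupt. When scanning a saved copy of a run like this one, a hypothetical filter keeps the signal (test-run.log is an assumed file name):

    # hypothetical: drop the stale-profile noise when reading a saved log
    grep -v 'cert_rotation.go:168' test-run.log | less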
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-743523 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0408 19:28:10.337288  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-743523 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.164997972s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-743523 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/embed-certs/serial/Stop (92.59s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-743523 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-743523 --alsologtostderr -v=3: (1m32.586033807s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.59s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-113058 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-113058 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.233739034s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-113058 describe deploy/metrics-server -n kube-system
E0408 19:28:12.898165  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.34s)

TestStartStop/group/no-preload/serial/Stop (92.56s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-113058 --alsologtostderr -v=3
E0408 19:28:18.018386  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-113058 --alsologtostderr -v=3: (1m32.561824541s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.56s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-652174 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ea88a39e-30a7-4875-918d-0fd65cf506d3] Pending
helpers_test.go:344: "busybox" [ea88a39e-30a7-4875-918d-0fd65cf506d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ea88a39e-30a7-4875-918d-0fd65cf506d3] Running
E0408 19:28:28.258937  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.006424454s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-652174 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-652174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0408 19:28:34.726487  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
E0408 19:28:34.731864  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
E0408 19:28:34.742208  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
E0408 19:28:34.762631  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
E0408 19:28:34.803073  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
E0408 19:28:34.883502  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
E0408 19:28:35.044344  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-652174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.040986807s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-652174 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-652174 --alsologtostderr -v=3
E0408 19:28:35.365508  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
E0408 19:28:36.006080  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
E0408 19:28:37.287096  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
E0408 19:28:39.848302  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
E0408 19:28:44.969344  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
E0408 19:28:48.739699  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-652174 --alsologtostderr -v=3: (1m32.529350221s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.53s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-763627 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4a81fdc6-a760-4789-b29f-4839371961b7] Pending
helpers_test.go:344: "busybox" [4a81fdc6-a760-4789-b29f-4839371961b7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4a81fdc6-a760-4789-b29f-4839371961b7] Running
E0408 19:28:55.209624  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.005695631s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-763627 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.45s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-763627 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-763627 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.001569232s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-763627 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/old-k8s-version/serial/Stop (92.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-763627 --alsologtostderr -v=3
E0408 19:29:15.690419  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
E0408 19:29:27.353507  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:29:27.358879  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:29:27.369225  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:29:27.389657  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:29:27.430132  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:29:27.511111  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:29:27.671825  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:29:27.992897  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:29:28.633668  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:29:29.700907  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
E0408 19:29:29.914507  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:29:32.475467  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:29:37.596297  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-763627 --alsologtostderr -v=3: (1m32.492642288s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.49s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-743523 -n embed-certs-743523
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-743523 -n embed-certs-743523: exit status 7 (79.653348ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-743523 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
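Note: EnableAddonAfterStop verifies that addons can be toggled while the cluster is down. minikube status exits non-zero against a stopped profile (exit status 7 in this run), which the harness explicitly tolerates ("may be ok") before enabling the dashboard addon. A sketch of the same tolerant status probe, reusing this run's profile name:

    # status exits non-zero when the host is stopped; treat that as expected
    if host=$(out/minikube-linux-amd64 status --format='{{.Host}}' -p embed-certs-743523 -n embed-certs-743523); then
      echo "host running: ${host}"
    else
      echo "host: ${host} (exit $?, expected after a deliberate stop)"
    fi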
TestStartStop/group/embed-certs/serial/SecondStart (324.58s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-743523 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-743523 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (5m24.215360587s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-743523 -n embed-certs-743523
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (324.58s)
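Note: SecondStart runs the identical minikube start command against the profile that was just stopped, so it exercises the restart path, reusing the existing KVM disk and cluster state rather than reprovisioning; the follow-up status probe only asserts the host came back, while survival of the deployed app and enabled addons is checked by the later UserAppExistsAfterStop and AddonExistsAfterStop steps. The stop/start round trip in brief, with flags taken from this run's log:

    out/minikube-linux-amd64 stop -p embed-certs-743523 --alsologtostderr -v=3
    out/minikube-linux-amd64 start -p embed-certs-743523 --memory=2200 --alsologtostderr --wait=true \
      --embed-certs --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.29.3
    out/minikube-linux-amd64 status --format='{{.Host}}' -p embed-certs-743523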
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-113058 -n no-preload-113058
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-113058 -n no-preload-113058: exit status 7 (86.132495ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-113058 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (338.23s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-113058 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1
E0408 19:29:47.837267  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:29:56.651279  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-113058 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1: (5m37.833096187s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-113058 -n no-preload-113058
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (338.23s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-652174 -n default-k8s-diff-port-652174
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-652174 -n default-k8s-diff-port-652174: exit status 7 (103.580836ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-652174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (341.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-652174 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0408 19:30:08.317705  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:30:14.941857  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:30:14.947186  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:30:14.957631  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:30:14.977984  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:30:15.018374  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:30:15.099256  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:30:15.259679  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:30:15.580158  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:30:16.221122  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:30:17.501364  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:30:18.530047  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 19:30:20.062176  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:30:20.336649  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
E0408 19:30:20.341940  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
E0408 19:30:20.352277  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
E0408 19:30:20.372608  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
E0408 19:30:20.412920  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
E0408 19:30:20.493711  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
E0408 19:30:20.654179  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
E0408 19:30:20.974841  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
E0408 19:30:21.615131  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
E0408 19:30:22.896117  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
E0408 19:30:25.183234  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:30:25.456931  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
E0408 19:30:30.578172  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-652174 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (5m41.491391264s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-652174 -n default-k8s-diff-port-652174
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (341.86s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-763627 -n old-k8s-version-763627
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-763627 -n old-k8s-version-763627: exit status 7 (118.942296ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-763627 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/old-k8s-version/serial/SecondStart (207.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-763627 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0408 19:30:35.423805  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:30:40.819311  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
E0408 19:30:49.278924  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:30:51.621585  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
E0408 19:30:55.904783  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:31:01.300043  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
E0408 19:31:09.142520  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
E0408 19:31:09.147879  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
E0408 19:31:09.158204  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
E0408 19:31:09.178511  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
E0408 19:31:09.218873  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
E0408 19:31:09.299286  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
E0408 19:31:09.459679  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
E0408 19:31:09.780871  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
E0408 19:31:10.421356  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
E0408 19:31:11.701575  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
E0408 19:31:14.262541  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
E0408 19:31:18.572492  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
E0408 19:31:19.383554  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
E0408 19:31:25.273891  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
E0408 19:31:25.279249  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
E0408 19:31:25.289620  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
E0408 19:31:25.310011  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
E0408 19:31:25.350399  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
E0408 19:31:25.430848  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
E0408 19:31:25.591344  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
E0408 19:31:25.912204  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
E0408 19:31:26.553272  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
E0408 19:31:27.833991  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
E0408 19:31:29.624746  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
E0408 19:31:30.394566  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
E0408 19:31:35.515183  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
E0408 19:31:36.865037  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:31:41.578976  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
E0408 19:31:42.260584  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
E0408 19:31:43.949410  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/functional-819351/client.crt: no such file or directory
E0408 19:31:45.756307  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
E0408 19:31:50.106075  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
E0408 19:32:06.236768  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
E0408 19:32:11.200141  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:32:31.067097  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
E0408 19:32:47.198038  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
E0408 19:32:58.785965  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:33:04.181082  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
E0408 19:33:07.776727  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
E0408 19:33:34.725587  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
E0408 19:33:35.461847  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/auto-827074/client.crt: no such file or directory
E0408 19:33:52.988339  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-763627 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m26.944011826s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-763627 -n old-k8s-version-763627
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (207.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lzplp" [9ccb7573-f94d-4dce-aeb9-c06e02bee535] Running
E0408 19:34:02.413198  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/kindnet-827074/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003946219s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lzplp" [9ccb7573-f94d-4dce-aeb9-c06e02bee535] Running
E0408 19:34:09.118740  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/bridge-827074/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005148168s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-763627 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-763627 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/Pause (3.00s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-763627 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-763627 -n old-k8s-version-763627
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-763627 -n old-k8s-version-763627: exit status 2 (282.813295ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-763627 -n old-k8s-version-763627
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-763627 -n old-k8s-version-763627: exit status 2 (284.470885ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-763627 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-763627 -n old-k8s-version-763627
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-763627 -n old-k8s-version-763627
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.00s)
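The pause round-trip above is easy to replay manually; exit status 2 from minikube status is expected while components are paused, which is why the test logs "may be ok". A minimal sketch, assuming the same profile:

out/minikube-linux-amd64 pause -p old-k8s-version-763627
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-763627  # prints Paused, exits 2
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p old-k8s-version-763627    # prints Stopped, exits 2
out/minikube-linux-amd64 unpause -p old-k8s-version-763627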

TestStartStop/group/newest-cni/serial/FirstStart (65.94s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-756327 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1
E0408 19:34:27.353837  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
E0408 19:34:55.041352  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/calico-827074/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-756327 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1: (1m5.939960538s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (65.94s)
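For readability, here is the same start invocation from the log, reformatted with annotations; the flag summaries are paraphrased, see minikube start --help for the authoritative text:

# --memory=2200: RAM for the VM (MB by default)
# --wait=apiserver,system_pods,default_sa: components minikube blocks on before returning
# --feature-gates ServerSideApply=true: Kubernetes feature gate passed through to components
# --network-plugin=cni: use a CNI plugin rather than kubenet
# --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16: pod CIDR handed to kubeadm
out/minikube-linux-amd64 start -p newest-cni-756327 --memory=2200 --alsologtostderr \
  --wait=apiserver,system_pods,default_sa \
  --feature-gates ServerSideApply=true \
  --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1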

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8vzzm" [31a28ef2-f576-4a13-ace6-64b0bd756bbb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0408 19:35:14.942647  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
E0408 19:35:18.529680  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/addons-647801/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8vzzm" [31a28ef2-f576-4a13-ace6-64b0bd756bbb] Running
E0408 19:35:20.336820  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/enable-default-cni-827074/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.005426627s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-756327 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-756327 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.416724299s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)
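The enable step can be verified after the fact. A minimal sketch, assuming the profile is still up (the grep pattern is illustrative):

# Confirm metrics-server now shows as enabled for this profile.
out/minikube-linux-amd64 addons list -p newest-cni-756327 | grep metrics-server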

TestStartStop/group/newest-cni/serial/Stop (7.43s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-756327 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-756327 --alsologtostderr -v=3: (7.434028469s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.43s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fkbjg" [ec095a4b-224c-4015-9535-f82bc74ecaff] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fkbjg" [ec095a4b-224c-4015-9535-f82bc74ecaff] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.005753237s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8vzzm" [31a28ef2-f576-4a13-ace6-64b0bd756bbb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015823265s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-743523 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-756327 -n newest-cni-756327
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-756327 -n newest-cni-756327: exit status 7 (110.336925ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-756327 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)
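Exit status 7 from minikube status marks a fully stopped host, just as exit status 2 marks a paused control plane in the Pause tests; the harness therefore notes "may be ok" rather than failing. A minimal sketch of the same probe-then-enable sequence:

out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-756327   # prints Stopped, exits 7
out/minikube-linux-amd64 addons enable dashboard -p newest-cni-756327 \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4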

TestStartStop/group/newest-cni/serial/SecondStart (43.1s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-756327 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-756327 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.1: (42.826999644s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-756327 -n newest-cni-756327
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (43.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-743523 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/embed-certs/serial/Pause (3.74s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-743523 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-743523 --alsologtostderr -v=1: (1.142997947s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-743523 -n embed-certs-743523
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-743523 -n embed-certs-743523: exit status 2 (380.281188ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-743523 -n embed-certs-743523
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-743523 -n embed-certs-743523: exit status 2 (370.272409ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-743523 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-743523 -n embed-certs-743523
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-743523 -n embed-certs-743523
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.74s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fkbjg" [ec095a4b-224c-4015-9535-f82bc74ecaff] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005982123s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-113058 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-113058 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (3.45s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-113058 --alsologtostderr -v=1
E0408 19:35:42.626649  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/custom-flannel-827074/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-113058 --alsologtostderr -v=1: (1.054055773s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-113058 -n no-preload-113058
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-113058 -n no-preload-113058: exit status 2 (304.260214ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-113058 -n no-preload-113058
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-113058 -n no-preload-113058: exit status 2 (301.0452ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-113058 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-113058 -n no-preload-113058
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-113058 -n no-preload-113058
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.45s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gqtch" [860b64fa-7dba-4201-a76a-9725c652e377] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gqtch" [860b64fa-7dba-4201-a76a-9725c652e377] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.009851947s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gqtch" [860b64fa-7dba-4201-a76a-9725c652e377] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004373566s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-652174 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-652174 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-652174 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-652174 -n default-k8s-diff-port-652174
E0408 19:36:09.141481  618237 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18585-610499/.minikube/profiles/flannel-827074/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-652174 -n default-k8s-diff-port-652174: exit status 2 (269.084892ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-652174 -n default-k8s-diff-port-652174
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-652174 -n default-k8s-diff-port-652174: exit status 2 (270.654808ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-652174 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-652174 -n default-k8s-diff-port-652174
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-652174 -n default-k8s-diff-port-652174
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-756327 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (2.68s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-756327 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-756327 -n newest-cni-756327
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-756327 -n newest-cni-756327: exit status 2 (269.185425ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-756327 -n newest-cni-756327
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-756327 -n newest-cni-756327: exit status 2 (273.999862ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-756327 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-756327 -n newest-cni-756327
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-756327 -n newest-cni-756327
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.68s)
Test skip (39/333)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.29.3/cached-images 0
15 TestDownloadOnly/v1.29.3/binaries 0
16 TestDownloadOnly/v1.29.3/kubectl 0
23 TestDownloadOnly/v1.30.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.30.0-rc.1/binaries 0
25 TestDownloadOnly/v1.30.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
151 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
262 TestNetworkPlugins/group/kubenet 3.36
271 TestNetworkPlugins/group/cilium 3.67
286 TestStartStop/group/disable-driver-mounts 0.18

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.30.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.30.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.30.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.36s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-827074 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-827074

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-827074

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-827074

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-827074

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-827074

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-827074

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-827074

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-827074

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-827074

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-827074

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-827074

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-827074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-827074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-827074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-827074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-827074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-827074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-827074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-827074" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-827074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-827074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-827074" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-827074

>>> host: docker daemon status:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: docker daemon config:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: docker system info:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: cri-docker daemon status:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: cri-docker daemon config:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: cri-dockerd version:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: containerd daemon status:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: containerd daemon config:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: containerd config dump:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: crio daemon status:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: crio daemon config:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: /etc/crio:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

>>> host: crio config:
* Profile "kubenet-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827074"

----------------------- debugLogs end: kubenet-827074 [took: 3.221793965s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-827074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-827074
--- SKIP: TestNetworkPlugins/group/kubenet (3.36s)

TestNetworkPlugins/group/cilium (3.67s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-827074 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-827074

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-827074

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-827074

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-827074

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-827074

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-827074

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-827074

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-827074

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-827074

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-827074

>>> host: /etc/nsswitch.conf:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: /etc/hosts:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: /etc/resolv.conf:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-827074

>>> host: crictl pods:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: crictl containers:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> k8s: describe netcat deployment:
error: context "cilium-827074" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-827074" does not exist

>>> k8s: netcat logs:
error: context "cilium-827074" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-827074" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-827074" does not exist

>>> k8s: coredns logs:
error: context "cilium-827074" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-827074" does not exist

>>> k8s: api server logs:
error: context "cilium-827074" does not exist

>>> host: /etc/cni:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: ip a s:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: ip r s:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: iptables-save:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: iptables table nat:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-827074

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-827074

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-827074" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-827074" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-827074

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-827074

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-827074" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-827074" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-827074" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-827074" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-827074" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: kubelet daemon config:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> k8s: kubelet logs:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-827074

>>> host: docker daemon status:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: docker daemon config:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: docker system info:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: cri-docker daemon status:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: cri-docker daemon config:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: cri-dockerd version:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: containerd daemon status:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: containerd daemon config:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: containerd config dump:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: crio daemon status:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: crio daemon config:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: /etc/crio:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

>>> host: crio config:
* Profile "cilium-827074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827074"

----------------------- debugLogs end: cilium-827074 [took: 3.51338831s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-827074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-827074
--- SKIP: TestNetworkPlugins/group/cilium (3.67s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-464436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-464436
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
