Test Report: KVM_Linux_containerd 18793

e5d92f0c4d7ea091f043b7a68a980727ecf8401d:2024-05-03:34314

Failed tests (1/325)

Order  Failed test                    Duration
36     TestAddons/parallel/Headlamp   3.21s
TestAddons/parallel/Headlamp (3.21s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-146858 --alsologtostderr -v=1
addons_test.go:824: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-146858 --alsologtostderr -v=1: exit status 11 (339.721144ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0503 21:35:11.155731   15887 out.go:291] Setting OutFile to fd 1 ...
	I0503 21:35:11.155870   15887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 21:35:11.155880   15887 out.go:304] Setting ErrFile to fd 2...
	I0503 21:35:11.155884   15887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 21:35:11.156100   15887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
	I0503 21:35:11.156352   15887 mustload.go:65] Loading cluster: addons-146858
	I0503 21:35:11.156687   15887 config.go:182] Loaded profile config "addons-146858": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0503 21:35:11.156706   15887 addons.go:597] checking whether the cluster is paused
	I0503 21:35:11.156796   15887 config.go:182] Loaded profile config "addons-146858": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0503 21:35:11.156809   15887 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:35:11.157149   15887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:35:11.157190   15887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:35:11.174284   15887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0503 21:35:11.174884   15887 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:35:11.175534   15887 main.go:141] libmachine: Using API Version  1
	I0503 21:35:11.175564   15887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:35:11.175960   15887 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:35:11.176174   15887 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:35:11.177965   15887 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:35:11.178192   15887 ssh_runner.go:195] Run: systemctl --version
	I0503 21:35:11.178220   15887 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:35:11.180816   15887 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:35:11.181266   15887 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:35:11.181301   15887 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:35:11.181445   15887 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:35:11.181606   15887 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:35:11.181740   15887 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:35:11.181876   15887 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:35:11.273564   15887 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0503 21:35:11.273626   15887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0503 21:35:11.354780   15887 cri.go:89] found id: "1233f167075e17a821607618c79ac9bb5d83bc412b8303909263bc24a9b621f4"
	I0503 21:35:11.354803   15887 cri.go:89] found id: "a911c81323b67142203d2bcfe5e8ff3acad5ad53b9a36b80b35c50ddeae6d807"
	I0503 21:35:11.354808   15887 cri.go:89] found id: "b7d5278a2f46762067c9fecfad6991ce9822ef9bcbdf19dff845f7518828be63"
	I0503 21:35:11.354812   15887 cri.go:89] found id: "b02202cc339e9af2bb21b8d38f301c9fefb673572dea4ea3bc4de2be8a795fa8"
	I0503 21:35:11.354816   15887 cri.go:89] found id: "c189f253af3205fc38cc8f2029413444ebd579931ec7137d0a7fa8311a4e291e"
	I0503 21:35:11.354828   15887 cri.go:89] found id: "7bbc5fe4f7c33fd67bafaf27cbefe95036d86ef6604523e1462cdf2f53cb652e"
	I0503 21:35:11.354832   15887 cri.go:89] found id: "86b061197c5aefbfff8814684df07f0e650d11159f8dee80ae2d6d6fe38f748a"
	I0503 21:35:11.354835   15887 cri.go:89] found id: "a85ae23330e5e28a07a3030763cc485640b095f6d64ede717373eec8686fc398"
	I0503 21:35:11.354839   15887 cri.go:89] found id: "bb20a8a45883a63d8445df01705ea29d98013a0990a32fb2135a06cfd30c4a6d"
	I0503 21:35:11.354853   15887 cri.go:89] found id: "4d394e7c37f7ef1a3557951f134f05ef1a23c3050606a04b1fe1437af9ee6232"
	I0503 21:35:11.354863   15887 cri.go:89] found id: "1fffe0f8196af47488733e0c9f63bd3766d98348a57897e39456c4786be78e8b"
	I0503 21:35:11.354867   15887 cri.go:89] found id: "b1cc0e651fa0ff1427680d64bc774583b8e7d274b801719d76d30db8d8451865"
	I0503 21:35:11.354871   15887 cri.go:89] found id: "3ff996d58c018ee2c8f1fcb246aa7cfba2824b2a21a4402239379f26b7e7c50b"
	I0503 21:35:11.354875   15887 cri.go:89] found id: "6f5b5676dfad3444e69a7d5df30bdfcfd85604cc8c0067b79c31e438eff8f935"
	I0503 21:35:11.354880   15887 cri.go:89] found id: "4cedeb46667029b5523175b78f462ee2ef49e68bc455195cc8d74191a1ac9e1e"
	I0503 21:35:11.354883   15887 cri.go:89] found id: "be5c6e8034bc06f90bff0111aaededf995be68f104e07f3b794cae818ab39c3c"
	I0503 21:35:11.354885   15887 cri.go:89] found id: "e5f7463a362b733d50da47fdf1a17ee01670c963c8ccbc76596f16fb32f3fb53"
	I0503 21:35:11.354889   15887 cri.go:89] found id: "cd7a346eb205cd7ddd9b975aaec3202ddbe0fe84a74075617aeee8e2b67fd983"
	I0503 21:35:11.354891   15887 cri.go:89] found id: "56018d31e5460f3327a258b50077e8758121453828a53ac104da346b83e99d8b"
	I0503 21:35:11.354894   15887 cri.go:89] found id: ""
	I0503 21:35:11.354942   15887 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0503 21:35:11.429272   15887 main.go:141] libmachine: Making call to close driver server
	I0503 21:35:11.429288   15887 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:35:11.429637   15887 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:35:11.429655   15887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:35:11.431944   15887 out.go:177] 
	W0503 21:35:11.433446   15887 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-03T21:35:11Z" level=error msg="stat /run/containerd/runc/k8s.io/e914f04c12efe3e60e0afd69d909e54328b4ae9c34ec46abe7a2323f3a6f6fbf: no such file or directory"
	
	W0503 21:35:11.433462   15887 out.go:239] * 
	W0503 21:35:11.435240   15887 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 21:35:11.436745   15887 out.go:177] 

** /stderr **
addons_test.go:826: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-146858 --alsologtostderr -v=1": exit status 11
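The paused-state check that produced exit status 11 can be rerun by hand against the cluster VM. The sketch below is built only from the two commands visible in the stderr trace above; the profile name `addons-146858` comes from this run, and the commands are only meaningful while that profile is still up:

```shell
# Step 1 of the check from the trace: list kube-system container IDs
# known to the CRI (run inside the VM via minikube ssh).
out/minikube-linux-amd64 ssh -p addons-146858 -- \
  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

# Step 2: ask runc for the state of every container under containerd's
# runc root. A container that exits between the two steps leaves runc
# stat-ing a deleted state directory, which matches the
# "no such file or directory" error in the trace.
out/minikube-linux-amd64 ssh -p addons-146858 -- \
  sudo runc --root /run/containerd/runc/k8s.io list -f json
```

If the second command fails transiently but succeeds on a retry, the failure is consistent with a race between container teardown and the paused-state listing rather than a genuinely paused cluster.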
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-146858 -n addons-146858
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-146858 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-146858 logs -n 25: (1.851325998s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-324176 | jenkins | v1.33.0 | 03 May 24 21:29 UTC |                     |
	|         | -p download-only-324176              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.0 | 03 May 24 21:30 UTC | 03 May 24 21:30 UTC |
	| delete  | -p download-only-324176              | download-only-324176 | jenkins | v1.33.0 | 03 May 24 21:30 UTC | 03 May 24 21:30 UTC |
	| start   | -o=json --download-only              | download-only-360729 | jenkins | v1.33.0 | 03 May 24 21:30 UTC |                     |
	|         | -p download-only-360729              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.0 | 03 May 24 21:31 UTC | 03 May 24 21:31 UTC |
	| delete  | -p download-only-360729              | download-only-360729 | jenkins | v1.33.0 | 03 May 24 21:31 UTC | 03 May 24 21:31 UTC |
	| delete  | -p download-only-324176              | download-only-324176 | jenkins | v1.33.0 | 03 May 24 21:31 UTC | 03 May 24 21:31 UTC |
	| delete  | -p download-only-360729              | download-only-360729 | jenkins | v1.33.0 | 03 May 24 21:31 UTC | 03 May 24 21:31 UTC |
	| start   | --download-only -p                   | binary-mirror-827678 | jenkins | v1.33.0 | 03 May 24 21:31 UTC |                     |
	|         | binary-mirror-827678                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42999               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-827678              | binary-mirror-827678 | jenkins | v1.33.0 | 03 May 24 21:31 UTC | 03 May 24 21:31 UTC |
	| addons  | enable dashboard -p                  | addons-146858        | jenkins | v1.33.0 | 03 May 24 21:31 UTC |                     |
	|         | addons-146858                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-146858        | jenkins | v1.33.0 | 03 May 24 21:31 UTC |                     |
	|         | addons-146858                        |                      |         |         |                     |                     |
	| start   | -p addons-146858 --wait=true         | addons-146858        | jenkins | v1.33.0 | 03 May 24 21:31 UTC | 03 May 24 21:34 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2          |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | addons-146858 addons                 | addons-146858        | jenkins | v1.33.0 | 03 May 24 21:34 UTC | 03 May 24 21:34 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-146858        | jenkins | v1.33.0 | 03 May 24 21:34 UTC | 03 May 24 21:34 UTC |
	|         | -p addons-146858                     |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-146858        | jenkins | v1.33.0 | 03 May 24 21:35 UTC | 03 May 24 21:35 UTC |
	|         | addons-146858                        |                      |         |         |                     |                     |
	| ip      | addons-146858 ip                     | addons-146858        | jenkins | v1.33.0 | 03 May 24 21:35 UTC | 03 May 24 21:35 UTC |
	| addons  | addons-146858 addons disable         | addons-146858        | jenkins | v1.33.0 | 03 May 24 21:35 UTC | 03 May 24 21:35 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-146858        | jenkins | v1.33.0 | 03 May 24 21:35 UTC |                     |
	|         | -p addons-146858                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/03 21:31:11
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0503 21:31:11.209373   14195 out.go:291] Setting OutFile to fd 1 ...
	I0503 21:31:11.209465   14195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 21:31:11.209479   14195 out.go:304] Setting ErrFile to fd 2...
	I0503 21:31:11.209486   14195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 21:31:11.209695   14195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
	I0503 21:31:11.210269   14195 out.go:298] Setting JSON to false
	I0503 21:31:11.211103   14195 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":812,"bootTime":1714771059,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0503 21:31:11.211158   14195 start.go:139] virtualization: kvm guest
	I0503 21:31:11.213199   14195 out.go:177] * [addons-146858] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0503 21:31:11.215048   14195 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 21:31:11.215051   14195 notify.go:220] Checking for updates...
	I0503 21:31:11.216773   14195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 21:31:11.218118   14195 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18793-6010/kubeconfig
	I0503 21:31:11.219358   14195 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18793-6010/.minikube
	I0503 21:31:11.220621   14195 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0503 21:31:11.221849   14195 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 21:31:11.223266   14195 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 21:31:11.253702   14195 out.go:177] * Using the kvm2 driver based on user configuration
	I0503 21:31:11.255003   14195 start.go:297] selected driver: kvm2
	I0503 21:31:11.255021   14195 start.go:901] validating driver "kvm2" against <nil>
	I0503 21:31:11.255032   14195 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 21:31:11.255723   14195 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 21:31:11.255813   14195 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18793-6010/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0503 21:31:11.269706   14195 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0503 21:31:11.269753   14195 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 21:31:11.269956   14195 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 21:31:11.270020   14195 cni.go:84] Creating CNI manager for ""
	I0503 21:31:11.270037   14195 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0503 21:31:11.270047   14195 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 21:31:11.270120   14195 start.go:340] cluster config:
	{Name:addons-146858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-146858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 21:31:11.270240   14195 iso.go:125] acquiring lock: {Name:mkac3cf29445902eddb693be62f8a45d3ca86578 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 21:31:11.272063   14195 out.go:177] * Starting "addons-146858" primary control-plane node in "addons-146858" cluster
	I0503 21:31:11.273456   14195 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0503 21:31:11.273500   14195 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18793-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4
	I0503 21:31:11.273511   14195 cache.go:56] Caching tarball of preloaded images
	I0503 21:31:11.273595   14195 preload.go:173] Found /home/jenkins/minikube-integration/18793-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0503 21:31:11.273608   14195 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on containerd
	I0503 21:31:11.273934   14195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/config.json ...
	I0503 21:31:11.273969   14195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/config.json: {Name:mka70468e4da1aa06621a0e29ba7a7f13e7d4de6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 21:31:11.274114   14195 start.go:360] acquireMachinesLock for addons-146858: {Name:mk9fd23d34ab050410daa41c2db8382c405b2c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 21:31:11.274179   14195 start.go:364] duration metric: took 49.66µs to acquireMachinesLock for "addons-146858"
	I0503 21:31:11.274205   14195 start.go:93] Provisioning new machine with config: &{Name:addons-146858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-146858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0503 21:31:11.274269   14195 start.go:125] createHost starting for "" (driver="kvm2")
	I0503 21:31:11.276095   14195 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0503 21:31:11.276225   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:31:11.276282   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:31:11.290596   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35295
	I0503 21:31:11.291006   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:31:11.291561   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:31:11.291576   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:31:11.291991   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:31:11.292221   14195 main.go:141] libmachine: (addons-146858) Calling .GetMachineName
	I0503 21:31:11.292380   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:31:11.292559   14195 start.go:159] libmachine.API.Create for "addons-146858" (driver="kvm2")
	I0503 21:31:11.292590   14195 client.go:168] LocalClient.Create starting
	I0503 21:31:11.292632   14195 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18793-6010/.minikube/certs/ca.pem
	I0503 21:31:11.454651   14195 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18793-6010/.minikube/certs/cert.pem
	I0503 21:31:11.785200   14195 main.go:141] libmachine: Running pre-create checks...
	I0503 21:31:11.785225   14195 main.go:141] libmachine: (addons-146858) Calling .PreCreateCheck
	I0503 21:31:11.785733   14195 main.go:141] libmachine: (addons-146858) Calling .GetConfigRaw
	I0503 21:31:11.786201   14195 main.go:141] libmachine: Creating machine...
	I0503 21:31:11.786216   14195 main.go:141] libmachine: (addons-146858) Calling .Create
	I0503 21:31:11.786364   14195 main.go:141] libmachine: (addons-146858) Creating KVM machine...
	I0503 21:31:11.787691   14195 main.go:141] libmachine: (addons-146858) DBG | found existing default KVM network
	I0503 21:31:11.788371   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:11.788236   14217 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0503 21:31:11.788422   14195 main.go:141] libmachine: (addons-146858) DBG | created network xml: 
	I0503 21:31:11.788450   14195 main.go:141] libmachine: (addons-146858) DBG | <network>
	I0503 21:31:11.788469   14195 main.go:141] libmachine: (addons-146858) DBG |   <name>mk-addons-146858</name>
	I0503 21:31:11.788484   14195 main.go:141] libmachine: (addons-146858) DBG |   <dns enable='no'/>
	I0503 21:31:11.788490   14195 main.go:141] libmachine: (addons-146858) DBG |   
	I0503 21:31:11.788497   14195 main.go:141] libmachine: (addons-146858) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0503 21:31:11.788507   14195 main.go:141] libmachine: (addons-146858) DBG |     <dhcp>
	I0503 21:31:11.788518   14195 main.go:141] libmachine: (addons-146858) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0503 21:31:11.788535   14195 main.go:141] libmachine: (addons-146858) DBG |     </dhcp>
	I0503 21:31:11.788554   14195 main.go:141] libmachine: (addons-146858) DBG |   </ip>
	I0503 21:31:11.788567   14195 main.go:141] libmachine: (addons-146858) DBG |   
	I0503 21:31:11.788578   14195 main.go:141] libmachine: (addons-146858) DBG | </network>
	I0503 21:31:11.788597   14195 main.go:141] libmachine: (addons-146858) DBG | 
	I0503 21:31:11.793595   14195 main.go:141] libmachine: (addons-146858) DBG | trying to create private KVM network mk-addons-146858 192.168.39.0/24...
	I0503 21:31:11.853422   14195 main.go:141] libmachine: (addons-146858) DBG | private KVM network mk-addons-146858 192.168.39.0/24 created
	I0503 21:31:11.853453   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:11.853386   14217 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18793-6010/.minikube
	I0503 21:31:11.853468   14195 main.go:141] libmachine: (addons-146858) Setting up store path in /home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858 ...
	I0503 21:31:11.853490   14195 main.go:141] libmachine: (addons-146858) Building disk image from file:///home/jenkins/minikube-integration/18793-6010/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0503 21:31:11.853521   14195 main.go:141] libmachine: (addons-146858) Downloading /home/jenkins/minikube-integration/18793-6010/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18793-6010/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0503 21:31:12.100269   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:12.100141   14217 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa...
	I0503 21:31:12.415766   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:12.415646   14217 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/addons-146858.rawdisk...
	I0503 21:31:12.415787   14195 main.go:141] libmachine: (addons-146858) DBG | Writing magic tar header
	I0503 21:31:12.415796   14195 main.go:141] libmachine: (addons-146858) DBG | Writing SSH key tar header
	I0503 21:31:12.415804   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:12.415771   14217 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858 ...
	I0503 21:31:12.415867   14195 main.go:141] libmachine: (addons-146858) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858
	I0503 21:31:12.415890   14195 main.go:141] libmachine: (addons-146858) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18793-6010/.minikube/machines
	I0503 21:31:12.415903   14195 main.go:141] libmachine: (addons-146858) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18793-6010/.minikube
	I0503 21:31:12.415914   14195 main.go:141] libmachine: (addons-146858) Setting executable bit set on /home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858 (perms=drwx------)
	I0503 21:31:12.415943   14195 main.go:141] libmachine: (addons-146858) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18793-6010
	I0503 21:31:12.415971   14195 main.go:141] libmachine: (addons-146858) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0503 21:31:12.415983   14195 main.go:141] libmachine: (addons-146858) Setting executable bit set on /home/jenkins/minikube-integration/18793-6010/.minikube/machines (perms=drwxr-xr-x)
	I0503 21:31:12.415997   14195 main.go:141] libmachine: (addons-146858) Setting executable bit set on /home/jenkins/minikube-integration/18793-6010/.minikube (perms=drwxr-xr-x)
	I0503 21:31:12.416006   14195 main.go:141] libmachine: (addons-146858) Setting executable bit set on /home/jenkins/minikube-integration/18793-6010 (perms=drwxrwxr-x)
	I0503 21:31:12.416017   14195 main.go:141] libmachine: (addons-146858) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0503 21:31:12.416036   14195 main.go:141] libmachine: (addons-146858) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0503 21:31:12.416049   14195 main.go:141] libmachine: (addons-146858) DBG | Checking permissions on dir: /home/jenkins
	I0503 21:31:12.416064   14195 main.go:141] libmachine: (addons-146858) DBG | Checking permissions on dir: /home
	I0503 21:31:12.416074   14195 main.go:141] libmachine: (addons-146858) DBG | Skipping /home - not owner
	I0503 21:31:12.416083   14195 main.go:141] libmachine: (addons-146858) Creating domain...
	I0503 21:31:12.417023   14195 main.go:141] libmachine: (addons-146858) define libvirt domain using xml: 
	I0503 21:31:12.417051   14195 main.go:141] libmachine: (addons-146858) <domain type='kvm'>
	I0503 21:31:12.417062   14195 main.go:141] libmachine: (addons-146858)   <name>addons-146858</name>
	I0503 21:31:12.417070   14195 main.go:141] libmachine: (addons-146858)   <memory unit='MiB'>4000</memory>
	I0503 21:31:12.417079   14195 main.go:141] libmachine: (addons-146858)   <vcpu>2</vcpu>
	I0503 21:31:12.417094   14195 main.go:141] libmachine: (addons-146858)   <features>
	I0503 21:31:12.417099   14195 main.go:141] libmachine: (addons-146858)     <acpi/>
	I0503 21:31:12.417106   14195 main.go:141] libmachine: (addons-146858)     <apic/>
	I0503 21:31:12.417123   14195 main.go:141] libmachine: (addons-146858)     <pae/>
	I0503 21:31:12.417136   14195 main.go:141] libmachine: (addons-146858)     
	I0503 21:31:12.417185   14195 main.go:141] libmachine: (addons-146858)   </features>
	I0503 21:31:12.417207   14195 main.go:141] libmachine: (addons-146858)   <cpu mode='host-passthrough'>
	I0503 21:31:12.417213   14195 main.go:141] libmachine: (addons-146858)   
	I0503 21:31:12.417220   14195 main.go:141] libmachine: (addons-146858)   </cpu>
	I0503 21:31:12.417228   14195 main.go:141] libmachine: (addons-146858)   <os>
	I0503 21:31:12.417233   14195 main.go:141] libmachine: (addons-146858)     <type>hvm</type>
	I0503 21:31:12.417241   14195 main.go:141] libmachine: (addons-146858)     <boot dev='cdrom'/>
	I0503 21:31:12.417247   14195 main.go:141] libmachine: (addons-146858)     <boot dev='hd'/>
	I0503 21:31:12.417254   14195 main.go:141] libmachine: (addons-146858)     <bootmenu enable='no'/>
	I0503 21:31:12.417258   14195 main.go:141] libmachine: (addons-146858)   </os>
	I0503 21:31:12.417267   14195 main.go:141] libmachine: (addons-146858)   <devices>
	I0503 21:31:12.417271   14195 main.go:141] libmachine: (addons-146858)     <disk type='file' device='cdrom'>
	I0503 21:31:12.417301   14195 main.go:141] libmachine: (addons-146858)       <source file='/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/boot2docker.iso'/>
	I0503 21:31:12.417325   14195 main.go:141] libmachine: (addons-146858)       <target dev='hdc' bus='scsi'/>
	I0503 21:31:12.417341   14195 main.go:141] libmachine: (addons-146858)       <readonly/>
	I0503 21:31:12.417356   14195 main.go:141] libmachine: (addons-146858)     </disk>
	I0503 21:31:12.417369   14195 main.go:141] libmachine: (addons-146858)     <disk type='file' device='disk'>
	I0503 21:31:12.417382   14195 main.go:141] libmachine: (addons-146858)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0503 21:31:12.417398   14195 main.go:141] libmachine: (addons-146858)       <source file='/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/addons-146858.rawdisk'/>
	I0503 21:31:12.417407   14195 main.go:141] libmachine: (addons-146858)       <target dev='hda' bus='virtio'/>
	I0503 21:31:12.417412   14195 main.go:141] libmachine: (addons-146858)     </disk>
	I0503 21:31:12.417423   14195 main.go:141] libmachine: (addons-146858)     <interface type='network'>
	I0503 21:31:12.417436   14195 main.go:141] libmachine: (addons-146858)       <source network='mk-addons-146858'/>
	I0503 21:31:12.417451   14195 main.go:141] libmachine: (addons-146858)       <model type='virtio'/>
	I0503 21:31:12.417467   14195 main.go:141] libmachine: (addons-146858)     </interface>
	I0503 21:31:12.417483   14195 main.go:141] libmachine: (addons-146858)     <interface type='network'>
	I0503 21:31:12.417499   14195 main.go:141] libmachine: (addons-146858)       <source network='default'/>
	I0503 21:31:12.417509   14195 main.go:141] libmachine: (addons-146858)       <model type='virtio'/>
	I0503 21:31:12.417513   14195 main.go:141] libmachine: (addons-146858)     </interface>
	I0503 21:31:12.417518   14195 main.go:141] libmachine: (addons-146858)     <serial type='pty'>
	I0503 21:31:12.417526   14195 main.go:141] libmachine: (addons-146858)       <target port='0'/>
	I0503 21:31:12.417531   14195 main.go:141] libmachine: (addons-146858)     </serial>
	I0503 21:31:12.417538   14195 main.go:141] libmachine: (addons-146858)     <console type='pty'>
	I0503 21:31:12.417547   14195 main.go:141] libmachine: (addons-146858)       <target type='serial' port='0'/>
	I0503 21:31:12.417554   14195 main.go:141] libmachine: (addons-146858)     </console>
	I0503 21:31:12.417560   14195 main.go:141] libmachine: (addons-146858)     <rng model='virtio'>
	I0503 21:31:12.417569   14195 main.go:141] libmachine: (addons-146858)       <backend model='random'>/dev/random</backend>
	I0503 21:31:12.417575   14195 main.go:141] libmachine: (addons-146858)     </rng>
	I0503 21:31:12.417581   14195 main.go:141] libmachine: (addons-146858)     
	I0503 21:31:12.417586   14195 main.go:141] libmachine: (addons-146858)     
	I0503 21:31:12.417591   14195 main.go:141] libmachine: (addons-146858)   </devices>
	I0503 21:31:12.417597   14195 main.go:141] libmachine: (addons-146858) </domain>
	I0503 21:31:12.417603   14195 main.go:141] libmachine: (addons-146858) 
	I0503 21:31:12.423353   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:7d:f9:ab in network default
	I0503 21:31:12.423925   14195 main.go:141] libmachine: (addons-146858) Ensuring networks are active...
	I0503 21:31:12.423949   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:12.424559   14195 main.go:141] libmachine: (addons-146858) Ensuring network default is active
	I0503 21:31:12.424793   14195 main.go:141] libmachine: (addons-146858) Ensuring network mk-addons-146858 is active
	I0503 21:31:12.425221   14195 main.go:141] libmachine: (addons-146858) Getting domain xml...
	I0503 21:31:12.425770   14195 main.go:141] libmachine: (addons-146858) Creating domain...
	I0503 21:31:13.790658   14195 main.go:141] libmachine: (addons-146858) Waiting to get IP...
	I0503 21:31:13.791384   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:13.791814   14195 main.go:141] libmachine: (addons-146858) DBG | unable to find current IP address of domain addons-146858 in network mk-addons-146858
	I0503 21:31:13.791838   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:13.791795   14217 retry.go:31] will retry after 233.311439ms: waiting for machine to come up
	I0503 21:31:14.026872   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:14.027342   14195 main.go:141] libmachine: (addons-146858) DBG | unable to find current IP address of domain addons-146858 in network mk-addons-146858
	I0503 21:31:14.027366   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:14.027304   14217 retry.go:31] will retry after 294.777984ms: waiting for machine to come up
	I0503 21:31:14.323718   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:14.324076   14195 main.go:141] libmachine: (addons-146858) DBG | unable to find current IP address of domain addons-146858 in network mk-addons-146858
	I0503 21:31:14.324105   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:14.324026   14217 retry.go:31] will retry after 345.663125ms: waiting for machine to come up
	I0503 21:31:14.671290   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:14.671642   14195 main.go:141] libmachine: (addons-146858) DBG | unable to find current IP address of domain addons-146858 in network mk-addons-146858
	I0503 21:31:14.671677   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:14.671604   14217 retry.go:31] will retry after 481.988973ms: waiting for machine to come up
	I0503 21:31:15.155237   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:15.155601   14195 main.go:141] libmachine: (addons-146858) DBG | unable to find current IP address of domain addons-146858 in network mk-addons-146858
	I0503 21:31:15.155631   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:15.155558   14217 retry.go:31] will retry after 712.740404ms: waiting for machine to come up
	I0503 21:31:15.869357   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:15.869783   14195 main.go:141] libmachine: (addons-146858) DBG | unable to find current IP address of domain addons-146858 in network mk-addons-146858
	I0503 21:31:15.869813   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:15.869732   14217 retry.go:31] will retry after 750.008166ms: waiting for machine to come up
	I0503 21:31:16.621440   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:16.621852   14195 main.go:141] libmachine: (addons-146858) DBG | unable to find current IP address of domain addons-146858 in network mk-addons-146858
	I0503 21:31:16.621881   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:16.621815   14217 retry.go:31] will retry after 1.013318903s: waiting for machine to come up
	I0503 21:31:17.636984   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:17.637368   14195 main.go:141] libmachine: (addons-146858) DBG | unable to find current IP address of domain addons-146858 in network mk-addons-146858
	I0503 21:31:17.637392   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:17.637326   14217 retry.go:31] will retry after 1.244355304s: waiting for machine to come up
	I0503 21:31:18.883937   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:18.884342   14195 main.go:141] libmachine: (addons-146858) DBG | unable to find current IP address of domain addons-146858 in network mk-addons-146858
	I0503 21:31:18.884373   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:18.884323   14217 retry.go:31] will retry after 1.159668534s: waiting for machine to come up
	I0503 21:31:20.045720   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:20.046063   14195 main.go:141] libmachine: (addons-146858) DBG | unable to find current IP address of domain addons-146858 in network mk-addons-146858
	I0503 21:31:20.046096   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:20.046015   14217 retry.go:31] will retry after 1.807954468s: waiting for machine to come up
	I0503 21:31:21.855706   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:21.856180   14195 main.go:141] libmachine: (addons-146858) DBG | unable to find current IP address of domain addons-146858 in network mk-addons-146858
	I0503 21:31:21.856241   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:21.856158   14217 retry.go:31] will retry after 2.618279593s: waiting for machine to come up
	I0503 21:31:24.476798   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:24.477305   14195 main.go:141] libmachine: (addons-146858) DBG | unable to find current IP address of domain addons-146858 in network mk-addons-146858
	I0503 21:31:24.477330   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:24.477246   14217 retry.go:31] will retry after 3.049762836s: waiting for machine to come up
	I0503 21:31:27.528738   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:27.529199   14195 main.go:141] libmachine: (addons-146858) DBG | unable to find current IP address of domain addons-146858 in network mk-addons-146858
	I0503 21:31:27.529245   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:27.529149   14217 retry.go:31] will retry after 3.162319821s: waiting for machine to come up
	I0503 21:31:30.695386   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:30.695853   14195 main.go:141] libmachine: (addons-146858) DBG | unable to find current IP address of domain addons-146858 in network mk-addons-146858
	I0503 21:31:30.695879   14195 main.go:141] libmachine: (addons-146858) DBG | I0503 21:31:30.695812   14217 retry.go:31] will retry after 4.506588629s: waiting for machine to come up
	I0503 21:31:35.203752   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.204153   14195 main.go:141] libmachine: (addons-146858) Found IP for machine: 192.168.39.58
	I0503 21:31:35.204194   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has current primary IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.204209   14195 main.go:141] libmachine: (addons-146858) Reserving static IP address...
	I0503 21:31:35.204483   14195 main.go:141] libmachine: (addons-146858) DBG | unable to find host DHCP lease matching {name: "addons-146858", mac: "52:54:00:f7:8d:3c", ip: "192.168.39.58"} in network mk-addons-146858
	I0503 21:31:35.273463   14195 main.go:141] libmachine: (addons-146858) DBG | Getting to WaitForSSH function...
	I0503 21:31:35.273497   14195 main.go:141] libmachine: (addons-146858) Reserved static IP address: 192.168.39.58
	I0503 21:31:35.273511   14195 main.go:141] libmachine: (addons-146858) Waiting for SSH to be available...
	I0503 21:31:35.276030   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.276397   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:35.276419   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.276568   14195 main.go:141] libmachine: (addons-146858) DBG | Using SSH client type: external
	I0503 21:31:35.276597   14195 main.go:141] libmachine: (addons-146858) DBG | Using SSH private key: /home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa (-rw-------)
	I0503 21:31:35.276627   14195 main.go:141] libmachine: (addons-146858) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0503 21:31:35.276640   14195 main.go:141] libmachine: (addons-146858) DBG | About to run SSH command:
	I0503 21:31:35.276655   14195 main.go:141] libmachine: (addons-146858) DBG | exit 0
	I0503 21:31:35.408025   14195 main.go:141] libmachine: (addons-146858) DBG | SSH cmd err, output: <nil>: 
	I0503 21:31:35.408301   14195 main.go:141] libmachine: (addons-146858) KVM machine creation complete!
	I0503 21:31:35.408649   14195 main.go:141] libmachine: (addons-146858) Calling .GetConfigRaw
	I0503 21:31:35.409154   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:31:35.409377   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:31:35.409551   14195 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0503 21:31:35.409567   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:31:35.410896   14195 main.go:141] libmachine: Detecting operating system of created instance...
	I0503 21:31:35.410909   14195 main.go:141] libmachine: Waiting for SSH to be available...
	I0503 21:31:35.410914   14195 main.go:141] libmachine: Getting to WaitForSSH function...
	I0503 21:31:35.410920   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:31:35.413032   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.413342   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:35.413370   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.413440   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:31:35.413604   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:31:35.413800   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:31:35.413919   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:31:35.414087   14195 main.go:141] libmachine: Using SSH client type: native
	I0503 21:31:35.414247   14195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0503 21:31:35.414256   14195 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0503 21:31:35.515042   14195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0503 21:31:35.515067   14195 main.go:141] libmachine: Detecting the provisioner...
	I0503 21:31:35.515076   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:31:35.517735   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.518114   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:35.518140   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.518351   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:31:35.518548   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:31:35.518775   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:31:35.518938   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:31:35.519105   14195 main.go:141] libmachine: Using SSH client type: native
	I0503 21:31:35.519334   14195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0503 21:31:35.519348   14195 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0503 21:31:35.624694   14195 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0503 21:31:35.624772   14195 main.go:141] libmachine: found compatible host: buildroot
	I0503 21:31:35.624783   14195 main.go:141] libmachine: Provisioning with buildroot...
	I0503 21:31:35.624790   14195 main.go:141] libmachine: (addons-146858) Calling .GetMachineName
	I0503 21:31:35.625048   14195 buildroot.go:166] provisioning hostname "addons-146858"
	I0503 21:31:35.625072   14195 main.go:141] libmachine: (addons-146858) Calling .GetMachineName
	I0503 21:31:35.625227   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:31:35.627462   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.627773   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:35.627797   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.627897   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:31:35.628080   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:31:35.628227   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:31:35.628361   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:31:35.628527   14195 main.go:141] libmachine: Using SSH client type: native
	I0503 21:31:35.628691   14195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0503 21:31:35.628704   14195 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-146858 && echo "addons-146858" | sudo tee /etc/hostname
	I0503 21:31:35.750833   14195 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-146858
	
	I0503 21:31:35.750858   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:31:35.753211   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.753595   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:35.753619   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.753825   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:31:35.753980   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:31:35.754112   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:31:35.754250   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:31:35.754399   14195 main.go:141] libmachine: Using SSH client type: native
	I0503 21:31:35.754590   14195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0503 21:31:35.754607   14195 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-146858' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-146858/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-146858' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0503 21:31:35.865824   14195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0503 21:31:35.865861   14195 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18793-6010/.minikube CaCertPath:/home/jenkins/minikube-integration/18793-6010/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18793-6010/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18793-6010/.minikube}
	I0503 21:31:35.865919   14195 buildroot.go:174] setting up certificates
	I0503 21:31:35.865935   14195 provision.go:84] configureAuth start
	I0503 21:31:35.865951   14195 main.go:141] libmachine: (addons-146858) Calling .GetMachineName
	I0503 21:31:35.866197   14195 main.go:141] libmachine: (addons-146858) Calling .GetIP
	I0503 21:31:35.868596   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.868941   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:35.868964   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.869039   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:31:35.871309   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.871672   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:35.871700   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:35.871826   14195 provision.go:143] copyHostCerts
	I0503 21:31:35.871896   14195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18793-6010/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18793-6010/.minikube/ca.pem (1078 bytes)
	I0503 21:31:35.872008   14195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18793-6010/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18793-6010/.minikube/cert.pem (1123 bytes)
	I0503 21:31:35.872061   14195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18793-6010/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18793-6010/.minikube/key.pem (1679 bytes)
	I0503 21:31:35.872145   14195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18793-6010/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18793-6010/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18793-6010/.minikube/certs/ca-key.pem org=jenkins.addons-146858 san=[127.0.0.1 192.168.39.58 addons-146858 localhost minikube]
	I0503 21:31:36.302863   14195 provision.go:177] copyRemoteCerts
	I0503 21:31:36.302915   14195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0503 21:31:36.302935   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:31:36.305432   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.305711   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:36.305743   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.305839   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:31:36.306014   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:31:36.306164   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:31:36.306270   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:31:36.390903   14195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18793-6010/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0503 21:31:36.416275   14195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18793-6010/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0503 21:31:36.441610   14195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18793-6010/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0503 21:31:36.467695   14195 provision.go:87] duration metric: took 601.746812ms to configureAuth
	I0503 21:31:36.467718   14195 buildroot.go:189] setting minikube options for container-runtime
	I0503 21:31:36.467904   14195 config.go:182] Loaded profile config "addons-146858": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0503 21:31:36.467927   14195 main.go:141] libmachine: Checking connection to Docker...
	I0503 21:31:36.467938   14195 main.go:141] libmachine: (addons-146858) Calling .GetURL
	I0503 21:31:36.469280   14195 main.go:141] libmachine: (addons-146858) DBG | Using libvirt version 6000000
	I0503 21:31:36.471455   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.471899   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:36.471925   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.472073   14195 main.go:141] libmachine: Docker is up and running!
	I0503 21:31:36.472084   14195 main.go:141] libmachine: Reticulating splines...
	I0503 21:31:36.472091   14195 client.go:171] duration metric: took 25.179493779s to LocalClient.Create
	I0503 21:31:36.472114   14195 start.go:167] duration metric: took 25.179553713s to libmachine.API.Create "addons-146858"
	I0503 21:31:36.472128   14195 start.go:293] postStartSetup for "addons-146858" (driver="kvm2")
	I0503 21:31:36.472141   14195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0503 21:31:36.472159   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:31:36.472411   14195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0503 21:31:36.472435   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:31:36.474312   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.474590   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:36.474616   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.474778   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:31:36.474956   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:31:36.475098   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:31:36.475254   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:31:36.559616   14195 ssh_runner.go:195] Run: cat /etc/os-release
	I0503 21:31:36.564275   14195 info.go:137] Remote host: Buildroot 2023.02.9
	I0503 21:31:36.564300   14195 filesync.go:126] Scanning /home/jenkins/minikube-integration/18793-6010/.minikube/addons for local assets ...
	I0503 21:31:36.564372   14195 filesync.go:126] Scanning /home/jenkins/minikube-integration/18793-6010/.minikube/files for local assets ...
	I0503 21:31:36.564402   14195 start.go:296] duration metric: took 92.266765ms for postStartSetup
	I0503 21:31:36.564438   14195 main.go:141] libmachine: (addons-146858) Calling .GetConfigRaw
	I0503 21:31:36.565024   14195 main.go:141] libmachine: (addons-146858) Calling .GetIP
	I0503 21:31:36.567671   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.568007   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:36.568035   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.568268   14195 profile.go:143] Saving config to /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/config.json ...
	I0503 21:31:36.568428   14195 start.go:128] duration metric: took 25.294149211s to createHost
	I0503 21:31:36.568452   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:31:36.570631   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.570983   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:36.571016   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.571121   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:31:36.571304   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:31:36.571462   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:31:36.571579   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:31:36.571751   14195 main.go:141] libmachine: Using SSH client type: native
	I0503 21:31:36.571898   14195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0503 21:31:36.571909   14195 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0503 21:31:36.672729   14195 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714771896.640606903
	
	I0503 21:31:36.672751   14195 fix.go:216] guest clock: 1714771896.640606903
	I0503 21:31:36.672760   14195 fix.go:229] Guest: 2024-05-03 21:31:36.640606903 +0000 UTC Remote: 2024-05-03 21:31:36.568440423 +0000 UTC m=+25.402367006 (delta=72.16648ms)
	I0503 21:31:36.672782   14195 fix.go:200] guest clock delta is within tolerance: 72.16648ms
	I0503 21:31:36.672789   14195 start.go:83] releasing machines lock for "addons-146858", held for 25.398597197s
	I0503 21:31:36.672814   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:31:36.673093   14195 main.go:141] libmachine: (addons-146858) Calling .GetIP
	I0503 21:31:36.675450   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.675791   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:36.675818   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.675926   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:31:36.676428   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:31:36.676608   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:31:36.676696   14195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0503 21:31:36.676745   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:31:36.676806   14195 ssh_runner.go:195] Run: cat /version.json
	I0503 21:31:36.676830   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:31:36.679182   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.679308   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.679489   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:36.679524   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.679689   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:31:36.679766   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:36.679804   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:36.679858   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:31:36.679986   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:31:36.680007   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:31:36.680163   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:31:36.680198   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:31:36.680294   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:31:36.680433   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:31:36.790528   14195 ssh_runner.go:195] Run: systemctl --version
	I0503 21:31:36.797051   14195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0503 21:31:36.803220   14195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0503 21:31:36.803309   14195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0503 21:31:36.821297   14195 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0503 21:31:36.821314   14195 start.go:494] detecting cgroup driver to use...
	I0503 21:31:36.821372   14195 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0503 21:31:36.859431   14195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0503 21:31:36.875755   14195 docker.go:217] disabling cri-docker service (if available) ...
	I0503 21:31:36.875814   14195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0503 21:31:36.891076   14195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0503 21:31:36.906038   14195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0503 21:31:37.029212   14195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0503 21:31:37.168315   14195 docker.go:233] disabling docker service ...
	I0503 21:31:37.168390   14195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0503 21:31:37.183840   14195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0503 21:31:37.197903   14195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0503 21:31:37.334276   14195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0503 21:31:37.450577   14195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0503 21:31:37.464833   14195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0503 21:31:37.486280   14195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0503 21:31:37.497439   14195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0503 21:31:37.508341   14195 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0503 21:31:37.508406   14195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0503 21:31:37.519356   14195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0503 21:31:37.530449   14195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0503 21:31:37.541544   14195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0503 21:31:37.552540   14195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0503 21:31:37.563693   14195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0503 21:31:37.574957   14195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0503 21:31:37.586421   14195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0503 21:31:37.598518   14195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0503 21:31:37.608526   14195 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0503 21:31:37.608570   14195 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0503 21:31:37.622075   14195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0503 21:31:37.631959   14195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 21:31:37.771153   14195 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0503 21:31:37.802729   14195 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0503 21:31:37.802837   14195 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0503 21:31:37.807641   14195 retry.go:31] will retry after 1.464084544s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0503 21:31:39.273278   14195 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0503 21:31:39.279243   14195 start.go:562] Will wait 60s for crictl version
	I0503 21:31:39.279310   14195 ssh_runner.go:195] Run: which crictl
	I0503 21:31:39.283794   14195 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0503 21:31:39.322214   14195 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.15
	RuntimeApiVersion:  v1
	I0503 21:31:39.322330   14195 ssh_runner.go:195] Run: containerd --version
	I0503 21:31:39.358463   14195 ssh_runner.go:195] Run: containerd --version
	I0503 21:31:39.393414   14195 out.go:177] * Preparing Kubernetes v1.30.0 on containerd 1.7.15 ...
	I0503 21:31:39.394887   14195 main.go:141] libmachine: (addons-146858) Calling .GetIP
	I0503 21:31:39.397477   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:39.397684   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:31:39.397704   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:31:39.397889   14195 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0503 21:31:39.402978   14195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0503 21:31:39.417218   14195 kubeadm.go:877] updating cluster {Name:addons-146858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:addons-146858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0503 21:31:39.417363   14195 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0503 21:31:39.417421   14195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0503 21:31:39.451140   14195 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0503 21:31:39.451212   14195 ssh_runner.go:195] Run: which lz4
	I0503 21:31:39.455645   14195 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0503 21:31:39.460440   14195 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0503 21:31:39.460468   14195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18793-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (393937158 bytes)
	I0503 21:31:40.931377   14195 containerd.go:563] duration metric: took 1.475766172s to copy over tarball
	I0503 21:31:40.931480   14195 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0503 21:31:43.549136   14195 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.617624767s)
	I0503 21:31:43.549166   14195 containerd.go:570] duration metric: took 2.617763266s to extract the tarball
	I0503 21:31:43.549174   14195 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0503 21:31:43.588404   14195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 21:31:43.705662   14195 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0503 21:31:43.730408   14195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0503 21:31:43.782160   14195 retry.go:31] will retry after 293.676886ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-03T21:31:43Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0503 21:31:44.076728   14195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0503 21:31:44.116195   14195 containerd.go:627] all images are preloaded for containerd runtime.
	I0503 21:31:44.116217   14195 cache_images.go:84] Images are preloaded, skipping loading
	I0503 21:31:44.116225   14195 kubeadm.go:928] updating node { 192.168.39.58 8443 v1.30.0 containerd true true} ...
	I0503 21:31:44.116333   14195 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-146858 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-146858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0503 21:31:44.116386   14195 ssh_runner.go:195] Run: sudo crictl info
	I0503 21:31:44.157467   14195 cni.go:84] Creating CNI manager for ""
	I0503 21:31:44.157491   14195 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0503 21:31:44.157502   14195 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0503 21:31:44.157520   14195 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-146858 NodeName:addons-146858 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0503 21:31:44.157648   14195 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-146858"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
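The kubeadm config above is a single file holding four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a hedged illustration, a quick shell check can confirm the document count by counting top-level `kind:` stanzas; the heredoc below reproduces only the `apiVersion`/`kind` headers from the log, not the full option set:

```shell
# Illustrative sketch: count the YAML documents in a kubeadm-style multi-doc
# config by counting top-level `kind:` lines. Only the headers from the log
# are reproduced here.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$cfg"
```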
	
	I0503 21:31:44.157710   14195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0503 21:31:44.171364   14195 binaries.go:44] Found k8s binaries, skipping transfer
	I0503 21:31:44.171437   14195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0503 21:31:44.184354   14195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0503 21:31:44.203454   14195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0503 21:31:44.222021   14195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2170 bytes)
	I0503 21:31:44.240456   14195 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I0503 21:31:44.244754   14195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
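The `/etc/hosts` rewrite above uses a filter-then-append idiom: strip any stale line for the control-plane name, append the current IP, and copy the result back over the original. A hedged sketch of the same idiom against a throwaway file (the `192.168.39.58` address is from the log; the stale `192.168.39.1` entry and temp paths are invented for the demo):

```shell
# Filter-then-append hosts update, run against a temp file rather than the
# real /etc/hosts so it is safe to execute anywhere.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.39.1\tcontrol-plane.minikube.internal\n' > "$hosts"
# Remove any existing entry for the name, then append the fresh mapping.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"
  printf '192.168.39.58\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
grep 'control-plane.minikube.internal' "$hosts.new"
```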
	I0503 21:31:44.258985   14195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 21:31:44.380315   14195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0503 21:31:44.403278   14195 certs.go:68] Setting up /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858 for IP: 192.168.39.58
	I0503 21:31:44.403318   14195 certs.go:194] generating shared ca certs ...
	I0503 21:31:44.403337   14195 certs.go:226] acquiring lock for ca certs: {Name:mk5c80d8c3ed2dcb84b0b48ba22527b05e0a9cc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 21:31:44.403485   14195 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18793-6010/.minikube/ca.key
	I0503 21:31:44.504303   14195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18793-6010/.minikube/ca.crt ...
	I0503 21:31:44.504331   14195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18793-6010/.minikube/ca.crt: {Name:mk0a0c6d92218ac65689e9f74307a1f01459780b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 21:31:44.504491   14195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18793-6010/.minikube/ca.key ...
	I0503 21:31:44.504504   14195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18793-6010/.minikube/ca.key: {Name:mk6af73434fb57b0eb6520e9ce55a3a37d8408dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 21:31:44.504578   14195 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18793-6010/.minikube/proxy-client-ca.key
	I0503 21:31:44.551563   14195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18793-6010/.minikube/proxy-client-ca.crt ...
	I0503 21:31:44.551604   14195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18793-6010/.minikube/proxy-client-ca.crt: {Name:mk8d221e3d7d06e7d725fcb31f4a551fb48ad4f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 21:31:44.551800   14195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18793-6010/.minikube/proxy-client-ca.key ...
	I0503 21:31:44.551818   14195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18793-6010/.minikube/proxy-client-ca.key: {Name:mkf073e85049c3af6aa1754c2a39b792e7907ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 21:31:44.551908   14195 certs.go:256] generating profile certs ...
	I0503 21:31:44.551961   14195 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.key
	I0503 21:31:44.551975   14195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt with IP's: []
	I0503 21:31:44.642227   14195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt ...
	I0503 21:31:44.642256   14195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: {Name:mkf79271fa49ae34d3285240d3f6d895652a8d7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 21:31:44.642428   14195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.key ...
	I0503 21:31:44.642446   14195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.key: {Name:mkcef23f1369a890ec536dcc9009c0b89a38a8bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 21:31:44.642538   14195 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/apiserver.key.364bf659
	I0503 21:31:44.642565   14195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/apiserver.crt.364bf659 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.58]
	I0503 21:31:44.705976   14195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/apiserver.crt.364bf659 ...
	I0503 21:31:44.706004   14195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/apiserver.crt.364bf659: {Name:mkeb8d76b1e8db79bb14f3c745db26fc06bdb559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 21:31:44.706174   14195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/apiserver.key.364bf659 ...
	I0503 21:31:44.706193   14195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/apiserver.key.364bf659: {Name:mkf65061db28ed9a63a8ae41c9dcbf2576c90da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 21:31:44.706284   14195 certs.go:381] copying /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/apiserver.crt.364bf659 -> /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/apiserver.crt
	I0503 21:31:44.706371   14195 certs.go:385] copying /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/apiserver.key.364bf659 -> /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/apiserver.key
	I0503 21:31:44.706439   14195 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/proxy-client.key
	I0503 21:31:44.706463   14195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/proxy-client.crt with IP's: []
	I0503 21:31:44.831585   14195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/proxy-client.crt ...
	I0503 21:31:44.831612   14195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/proxy-client.crt: {Name:mkb1222408dfefe42ccbaa55b82653c4070fd894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 21:31:44.831794   14195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/proxy-client.key ...
	I0503 21:31:44.831812   14195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/proxy-client.key: {Name:mkb149522503a8b8421c1d6190ab209919126cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 21:31:44.832024   14195 certs.go:484] found cert: /home/jenkins/minikube-integration/18793-6010/.minikube/certs/ca-key.pem (1675 bytes)
	I0503 21:31:44.832071   14195 certs.go:484] found cert: /home/jenkins/minikube-integration/18793-6010/.minikube/certs/ca.pem (1078 bytes)
	I0503 21:31:44.832099   14195 certs.go:484] found cert: /home/jenkins/minikube-integration/18793-6010/.minikube/certs/cert.pem (1123 bytes)
	I0503 21:31:44.832133   14195 certs.go:484] found cert: /home/jenkins/minikube-integration/18793-6010/.minikube/certs/key.pem (1679 bytes)
	I0503 21:31:44.832681   14195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18793-6010/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0503 21:31:44.861483   14195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18793-6010/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0503 21:31:44.887336   14195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18793-6010/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0503 21:31:44.912930   14195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18793-6010/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0503 21:31:44.938806   14195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0503 21:31:44.965450   14195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0503 21:31:44.991771   14195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0503 21:31:45.018155   14195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0503 21:31:45.044605   14195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18793-6010/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0503 21:31:45.070569   14195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0503 21:31:45.098637   14195 ssh_runner.go:195] Run: openssl version
	I0503 21:31:45.106045   14195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0503 21:31:45.121157   14195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0503 21:31:45.129157   14195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  3 21:31 /usr/share/ca-certificates/minikubeCA.pem
	I0503 21:31:45.129217   14195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0503 21:31:45.140103   14195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
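The two symlink steps above install the minikube CA where OpenSSL can find it: first under its own name in `/usr/share/ca-certificates`, then as a `<subject-hash>.0` link in `/etc/ssl/certs`, where the hash comes from `openssl x509 -hash -noout -in <cert>` (`b5213941` in this run). A minimal sketch of the idempotent symlink step, using a temp directory and an empty placeholder instead of a real certificate:

```shell
# Idempotent hash-symlink creation, mirroring the log's
# `test -L ... || ln -fs ...` guard.
certdir=$(mktemp -d)
: > "$certdir/minikubeCA.pem"
hash=b5213941   # value from the log; normally computed with `openssl x509 -hash`
test -L "$certdir/$hash.0" || ln -fs "$certdir/minikubeCA.pem" "$certdir/$hash.0"
# Running the same guard again is a no-op: the link already exists.
test -L "$certdir/$hash.0" || ln -fs "$certdir/minikubeCA.pem" "$certdir/$hash.0"
readlink "$certdir/$hash.0"
```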
	I0503 21:31:45.158767   14195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0503 21:31:45.163585   14195 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0503 21:31:45.163639   14195 kubeadm.go:391] StartCluster: {Name:addons-146858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 C
lusterName:addons-146858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 21:31:45.163749   14195 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0503 21:31:45.163800   14195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0503 21:31:45.204215   14195 cri.go:89] found id: ""
	I0503 21:31:45.204298   14195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0503 21:31:45.216845   14195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0503 21:31:45.228250   14195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0503 21:31:45.239269   14195 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0503 21:31:45.239302   14195 kubeadm.go:156] found existing configuration files:
	
	I0503 21:31:45.239345   14195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0503 21:31:45.249898   14195 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0503 21:31:45.249957   14195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0503 21:31:45.261147   14195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0503 21:31:45.272173   14195 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0503 21:31:45.272236   14195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0503 21:31:45.283203   14195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0503 21:31:45.294511   14195 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0503 21:31:45.294578   14195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0503 21:31:45.305830   14195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0503 21:31:45.316799   14195 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0503 21:31:45.316861   14195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0503 21:31:45.327460   14195 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0503 21:31:45.507446   14195 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0503 21:31:55.849253   14195 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0503 21:31:55.849313   14195 kubeadm.go:309] [preflight] Running pre-flight checks
	I0503 21:31:55.849379   14195 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0503 21:31:55.849490   14195 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0503 21:31:55.849646   14195 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0503 21:31:55.849728   14195 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0503 21:31:55.851770   14195 out.go:204]   - Generating certificates and keys ...
	I0503 21:31:55.851855   14195 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0503 21:31:55.851923   14195 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0503 21:31:55.852004   14195 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0503 21:31:55.852099   14195 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0503 21:31:55.852180   14195 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0503 21:31:55.852239   14195 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0503 21:31:55.852300   14195 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0503 21:31:55.852464   14195 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-146858 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
	I0503 21:31:55.852548   14195 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0503 21:31:55.852693   14195 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-146858 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
	I0503 21:31:55.852749   14195 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0503 21:31:55.852835   14195 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0503 21:31:55.852903   14195 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0503 21:31:55.852987   14195 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0503 21:31:55.853061   14195 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0503 21:31:55.853141   14195 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0503 21:31:55.853220   14195 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0503 21:31:55.853317   14195 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0503 21:31:55.853401   14195 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0503 21:31:55.853513   14195 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0503 21:31:55.853576   14195 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0503 21:31:55.855040   14195 out.go:204]   - Booting up control plane ...
	I0503 21:31:55.855127   14195 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0503 21:31:55.855207   14195 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0503 21:31:55.855276   14195 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0503 21:31:55.855373   14195 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0503 21:31:55.855453   14195 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0503 21:31:55.855487   14195 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0503 21:31:55.855595   14195 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0503 21:31:55.855669   14195 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0503 21:31:55.855759   14195 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002509471s
	I0503 21:31:55.855815   14195 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0503 21:31:55.855869   14195 kubeadm.go:309] [api-check] The API server is healthy after 5.003259342s
	I0503 21:31:55.855970   14195 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0503 21:31:55.856094   14195 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0503 21:31:55.856144   14195 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0503 21:31:55.856285   14195 kubeadm.go:309] [mark-control-plane] Marking the node addons-146858 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0503 21:31:55.856333   14195 kubeadm.go:309] [bootstrap-token] Using token: 1otcea.c71itnf7mnokpxz6
	I0503 21:31:55.857824   14195 out.go:204]   - Configuring RBAC rules ...
	I0503 21:31:55.857915   14195 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0503 21:31:55.857983   14195 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0503 21:31:55.858119   14195 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0503 21:31:55.858254   14195 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0503 21:31:55.858353   14195 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0503 21:31:55.858432   14195 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0503 21:31:55.858526   14195 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0503 21:31:55.858561   14195 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0503 21:31:55.858607   14195 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0503 21:31:55.858618   14195 kubeadm.go:309] 
	I0503 21:31:55.858672   14195 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0503 21:31:55.858678   14195 kubeadm.go:309] 
	I0503 21:31:55.858758   14195 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0503 21:31:55.858767   14195 kubeadm.go:309] 
	I0503 21:31:55.858807   14195 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0503 21:31:55.858873   14195 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0503 21:31:55.858948   14195 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0503 21:31:55.858957   14195 kubeadm.go:309] 
	I0503 21:31:55.859039   14195 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0503 21:31:55.859048   14195 kubeadm.go:309] 
	I0503 21:31:55.859119   14195 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0503 21:31:55.859128   14195 kubeadm.go:309] 
	I0503 21:31:55.859208   14195 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0503 21:31:55.859322   14195 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0503 21:31:55.859412   14195 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0503 21:31:55.859423   14195 kubeadm.go:309] 
	I0503 21:31:55.859490   14195 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0503 21:31:55.859549   14195 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0503 21:31:55.859555   14195 kubeadm.go:309] 
	I0503 21:31:55.859621   14195 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 1otcea.c71itnf7mnokpxz6 \
	I0503 21:31:55.859739   14195 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5809ee47c5f6c1c5bf06353372fec59ed390e07a45c82b135f98fc0fdbcb8aa3 \
	I0503 21:31:55.859764   14195 kubeadm.go:309] 	--control-plane 
	I0503 21:31:55.859770   14195 kubeadm.go:309] 
	I0503 21:31:55.859838   14195 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0503 21:31:55.859844   14195 kubeadm.go:309] 
	I0503 21:31:55.859910   14195 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 1otcea.c71itnf7mnokpxz6 \
	I0503 21:31:55.860010   14195 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5809ee47c5f6c1c5bf06353372fec59ed390e07a45c82b135f98fc0fdbcb8aa3 
	I0503 21:31:55.860022   14195 cni.go:84] Creating CNI manager for ""
	I0503 21:31:55.860028   14195 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0503 21:31:55.861568   14195 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0503 21:31:55.862782   14195 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0503 21:31:55.876978   14195 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0503 21:31:55.896708   14195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0503 21:31:55.896798   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:31:55.896844   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-146858 minikube.k8s.io/updated_at=2024_05_03T21_31_55_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=cc00050a34cebd4ea4e95f76540d25d17abab09a minikube.k8s.io/name=addons-146858 minikube.k8s.io/primary=true
	I0503 21:31:55.910510   14195 ops.go:34] apiserver oom_adj: -16
	I0503 21:31:56.051777   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:31:56.551823   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:31:57.052618   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:31:57.552679   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:31:58.052314   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:31:58.551895   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:31:59.052103   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:31:59.552017   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:00.052836   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:00.552150   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:01.051870   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:01.552279   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:02.052024   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:02.552858   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:03.051820   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:03.552627   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:04.052026   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:04.552685   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:05.051971   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:05.552194   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:06.052718   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:06.552477   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:07.052341   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:07.552749   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:08.052867   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:08.552892   14195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 21:32:08.672584   14195 kubeadm.go:1107] duration metric: took 12.775841687s to wait for elevateKubeSystemPrivileges
	W0503 21:32:08.672634   14195 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0503 21:32:08.672645   14195 kubeadm.go:393] duration metric: took 23.509009871s to StartCluster
	I0503 21:32:08.672668   14195 settings.go:142] acquiring lock: {Name:mk7452ff60d58527e5677c754c173835c0ea2c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 21:32:08.672798   14195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18793-6010/kubeconfig
	I0503 21:32:08.673281   14195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18793-6010/kubeconfig: {Name:mk8ab715dbe3ab4cc18e4d5d6884d8774646361f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 21:32:08.674008   14195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0503 21:32:08.674042   14195 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0503 21:32:08.675735   14195 out.go:177] * Verifying Kubernetes components...
	I0503 21:32:08.674123   14195 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0503 21:32:08.674261   14195 config.go:182] Loaded profile config "addons-146858": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0503 21:32:08.677717   14195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 21:32:08.677773   14195 addons.go:69] Setting default-storageclass=true in profile "addons-146858"
	I0503 21:32:08.677800   14195 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-146858"
	I0503 21:32:08.677827   14195 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-146858"
	I0503 21:32:08.677846   14195 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-146858"
	I0503 21:32:08.677768   14195 addons.go:69] Setting yakd=true in profile "addons-146858"
	I0503 21:32:08.677886   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:08.677913   14195 addons.go:234] Setting addon yakd=true in "addons-146858"
	I0503 21:32:08.677770   14195 addons.go:69] Setting ingress=true in profile "addons-146858"
	I0503 21:32:08.677979   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:08.677988   14195 addons.go:234] Setting addon ingress=true in "addons-146858"
	I0503 21:32:08.678030   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:08.677804   14195 addons.go:69] Setting registry=true in profile "addons-146858"
	I0503 21:32:08.678081   14195 addons.go:234] Setting addon registry=true in "addons-146858"
	I0503 21:32:08.678115   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:08.677786   14195 addons.go:69] Setting inspektor-gadget=true in profile "addons-146858"
	I0503 21:32:08.678158   14195 addons.go:234] Setting addon inspektor-gadget=true in "addons-146858"
	I0503 21:32:08.678188   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:08.678335   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.678335   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.678356   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.678371   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.678376   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.678383   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.677785   14195 addons.go:69] Setting gcp-auth=true in profile "addons-146858"
	I0503 21:32:08.677780   14195 addons.go:69] Setting ingress-dns=true in profile "addons-146858"
	I0503 21:32:08.678406   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.678416   14195 mustload.go:65] Loading cluster: addons-146858
	I0503 21:32:08.678417   14195 addons.go:234] Setting addon ingress-dns=true in "addons-146858"
	I0503 21:32:08.677796   14195 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-146858"
	I0503 21:32:08.677790   14195 addons.go:69] Setting metrics-server=true in profile "addons-146858"
	I0503 21:32:08.677793   14195 addons.go:69] Setting cloud-spanner=true in profile "addons-146858"
	I0503 21:32:08.678467   14195 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-146858"
	I0503 21:32:08.677802   14195 addons.go:69] Setting storage-provisioner=true in profile "addons-146858"
	I0503 21:32:08.678475   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.678481   14195 addons.go:234] Setting addon cloud-spanner=true in "addons-146858"
	I0503 21:32:08.677795   14195 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-146858"
	I0503 21:32:08.678495   14195 addons.go:234] Setting addon storage-provisioner=true in "addons-146858"
	I0503 21:32:08.678454   14195 addons.go:234] Setting addon metrics-server=true in "addons-146858"
	I0503 21:32:08.678503   14195 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-146858"
	I0503 21:32:08.677805   14195 addons.go:69] Setting volumesnapshots=true in profile "addons-146858"
	I0503 21:32:08.677786   14195 addons.go:69] Setting helm-tiller=true in profile "addons-146858"
	I0503 21:32:08.678532   14195 addons.go:234] Setting addon volumesnapshots=true in "addons-146858"
	I0503 21:32:08.678545   14195 addons.go:234] Setting addon helm-tiller=true in "addons-146858"
	I0503 21:32:08.678614   14195 config.go:182] Loaded profile config "addons-146858": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0503 21:32:08.678649   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:08.678679   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:08.678723   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.678753   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.678815   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:08.679009   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.679027   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.679029   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.679036   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:08.679055   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.679089   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.679108   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.679114   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.679126   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:08.679136   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.679185   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.679205   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.679298   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:08.679327   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:08.679339   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.679367   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.679612   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.679687   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.679708   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.679735   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.679795   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.679813   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.699775   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45659
	I0503 21:32:08.699793   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0503 21:32:08.699776   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40377
	I0503 21:32:08.699776   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44229
	I0503 21:32:08.700311   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.700325   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.700543   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.700863   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.700870   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.700883   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.700887   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.700979   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.701001   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.701203   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.701326   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.701683   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.701716   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.701819   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.701849   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.702003   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.704265   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.704302   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.718547   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.718548   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.718631   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.719775   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.719797   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.720312   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.720922   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.720969   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.723360   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46157
	I0503 21:32:08.727383   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.732485   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.732510   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.732976   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.734212   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.734248   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.739902   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38697
	I0503 21:32:08.740332   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.740885   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.740910   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.741389   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.741652   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.742247   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38151
	I0503 21:32:08.742619   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.743165   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.743187   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.743498   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.743548   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45029
	I0503 21:32:08.743690   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.744393   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.744879   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.744903   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.745283   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.745476   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.745752   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:08.746147   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.746176   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.746731   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38297
	I0503 21:32:08.747954   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:08.747967   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:08.748027   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37199
	I0503 21:32:08.748049   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.750777   14195 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0503 21:32:08.748641   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.749011   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.750040   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35259
	I0503 21:32:08.750627   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35693
	I0503 21:32:08.751999   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0503 21:32:08.753022   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0503 21:32:08.753044   14195 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0503 21:32:08.753065   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0503 21:32:08.753095   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:08.753114   14195 out.go:177]   - Using image docker.io/registry:2.8.3
	I0503 21:32:08.753214   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.753465   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.753491   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.753693   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.753791   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.755532   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.755758   14195 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0503 21:32:08.757444   14195 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0503 21:32:08.757462   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0503 21:32:08.757467   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
	I0503 21:32:08.757481   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:08.755879   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.756207   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.756335   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.756342   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.757609   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.757156   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.757585   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.757653   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:08.757690   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.757866   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.757971   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37131
	I0503 21:32:08.758052   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.758065   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.758094   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.758122   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.758239   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.758331   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.758523   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:08.758508   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.758828   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.758847   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.758893   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.758927   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.759107   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.759191   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.759228   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.758898   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:08.759256   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.759476   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:08.759705   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:32:08.760087   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.760099   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.760219   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.760230   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.760787   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.760821   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.761318   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.761326   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34075
	I0503 21:32:08.761363   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.761422   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.761472   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:08.761492   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.761672   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.761779   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.761830   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:08.761903   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.761936   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.762185   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:08.762316   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:08.762422   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:32:08.762906   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.763524   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.763558   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.763795   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.764216   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.764233   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.764588   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:08.764597   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.767872   14195 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0503 21:32:08.765128   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.766523   14195 addons.go:234] Setting addon default-storageclass=true in "addons-146858"
	I0503 21:32:08.770195   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:08.770554   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.770584   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.772278   14195 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0503 21:32:08.770855   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.775336   14195 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0503 21:32:08.777179   14195 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0503 21:32:08.778164   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38743
	I0503 21:32:08.778536   14195 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0503 21:32:08.778969   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.780092   14195 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0503 21:32:08.780774   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.781907   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.781975   14195 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0503 21:32:08.782319   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.783867   14195 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0503 21:32:08.785478   14195 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0503 21:32:08.785497   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0503 21:32:08.785525   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:08.784024   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.784166   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39281
	I0503 21:32:08.786022   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.786581   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.786597   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.786655   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42487
	I0503 21:32:08.788620   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:08.788891   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.791093   14195 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0503 21:32:08.789279   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:08.789331   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.789367   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.789535   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:08.791207   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.791352   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:08.791525   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.792262   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.792926   14195 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0503 21:32:08.792944   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.793124   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:08.794660   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:08.795196   14195 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0503 21:32:08.797251   14195 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 21:32:08.795608   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.796119   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34989
	I0503 21:32:08.796528   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0503 21:32:08.796648   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:32:08.797349   14195 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0503 21:32:08.798698   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0503 21:32:08.798720   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:08.798771   14195 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0503 21:32:08.798787   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0503 21:32:08.798803   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:08.799437   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.799507   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.800126   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.800309   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.800321   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.800616   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.800637   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.800857   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.801047   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.801084   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.801212   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43985
	I0503 21:32:08.802282   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.802517   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.803076   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.803093   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.803197   14195 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-146858"
	I0503 21:32:08.803239   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:08.803409   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36629
	I0503 21:32:08.803615   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.803645   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.803753   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.804192   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.804260   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:08.807402   14195 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0503 21:32:08.805129   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36205
	I0503 21:32:08.806143   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:08.806398   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:08.806430   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.807639   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:08.807671   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.806460   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.807235   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.809720   14195 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0503 21:32:08.811084   14195 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0503 21:32:08.807866   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:08.808018   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:08.808135   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.808214   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:08.808496   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.809736   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0503 21:32:08.810480   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I0503 21:32:08.812509   14195 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0503 21:32:08.812702   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:08.813813   14195 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0503 21:32:08.815315   14195 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0503 21:32:08.815337   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0503 21:32:08.815356   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:08.813853   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.813871   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.813886   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0503 21:32:08.815471   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:08.813953   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:08.814165   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:08.814171   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:08.814362   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.815055   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.815606   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.815912   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.816066   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:32:08.816445   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.816468   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.816530   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:08.816570   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:08.818164   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:32:08.818700   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.818750   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.819378   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.819426   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.820024   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.820083   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34031
	I0503 21:32:08.820216   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.820237   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.820611   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:08.820644   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.820825   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:08.821001   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:08.821197   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:08.821260   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:08.821275   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.821453   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:08.821502   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:32:08.821751   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
	I0503 21:32:08.821894   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.822016   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.822612   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.822627   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.822735   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.822747   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.822801   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:08.822847   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.823239   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.823276   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:08.823288   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:08.823302   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.823314   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.823329   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:08.823350   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:08.823603   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.823642   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:32:08.823716   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.823721   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:08.826094   14195 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0503 21:32:08.824005   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:08.825600   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:08.825953   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:08.826270   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40067
	I0503 21:32:08.826397   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44333
	I0503 21:32:08.827588   14195 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0503 21:32:08.827603   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0503 21:32:08.827619   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:08.828341   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:32:08.829737   14195 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.16
	I0503 21:32:08.828696   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.828931   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.831109   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.831296   14195 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0503 21:32:08.831329   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0503 21:32:08.831344   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:08.831376   14195 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0503 21:32:08.832971   14195 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0503 21:32:08.832987   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0503 21:32:08.833004   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:08.831490   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:08.833053   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.831680   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:08.831993   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.833097   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.832121   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.833121   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.833634   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.833642   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.833692   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:08.833848   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:08.833911   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.834131   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:08.834160   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:08.834675   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:32:08.836136   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.836176   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:08.837936   14195 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0503 21:32:08.839159   14195 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0503 21:32:08.839171   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0503 21:32:08.839184   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:08.837653   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:08.839223   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.837704   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:08.837887   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.839312   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:08.839335   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.838485   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:08.839400   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:08.839473   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:08.839508   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:08.839618   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:32:08.839871   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:08.840038   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:32:08.841996   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.842330   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:08.842348   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.842627   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:08.842808   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:08.842948   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:08.843070   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:32:08.843305   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43303
	I0503 21:32:08.843698   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.844149   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.844159   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.844521   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.844632   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.846020   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:08.846223   14195 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0503 21:32:08.846235   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0503 21:32:08.846249   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:08.848731   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.849167   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:08.849185   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.849306   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:08.849449   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:08.849569   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:08.849720   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:32:08.854717   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41421
	I0503 21:32:08.855037   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:08.855454   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:08.855464   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:08.855818   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:08.856030   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:08.857765   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:08.860403   14195 out.go:177]   - Using image docker.io/busybox:stable
	I0503 21:32:08.862252   14195 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0503 21:32:08.863580   14195 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0503 21:32:08.863592   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0503 21:32:08.863606   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:08.866694   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.867025   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:08.867039   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:08.867280   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:08.867420   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:08.867543   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:08.867622   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	W0503 21:32:08.874748   14195 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55136->192.168.39.58:22: read: connection reset by peer
	I0503 21:32:08.874793   14195 retry.go:31] will retry after 351.145559ms: ssh: handshake failed: read tcp 192.168.39.1:55136->192.168.39.58:22: read: connection reset by peer
	I0503 21:32:09.301146   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0503 21:32:09.393143   14195 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0503 21:32:09.393167   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0503 21:32:09.409058   14195 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0503 21:32:09.409079   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0503 21:32:09.411804   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0503 21:32:09.457795   14195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0503 21:32:09.458037   14195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0503 21:32:09.492680   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0503 21:32:09.519081   14195 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0503 21:32:09.519108   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0503 21:32:09.521519   14195 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0503 21:32:09.521538   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0503 21:32:09.522597   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0503 21:32:09.531610   14195 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0503 21:32:09.531629   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0503 21:32:09.539537   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0503 21:32:09.550632   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0503 21:32:09.569935   14195 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0503 21:32:09.569959   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0503 21:32:09.573441   14195 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0503 21:32:09.573462   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0503 21:32:09.659844   14195 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0503 21:32:09.659867   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0503 21:32:09.676900   14195 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0503 21:32:09.676922   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0503 21:32:09.726151   14195 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0503 21:32:09.726173   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0503 21:32:09.759941   14195 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0503 21:32:09.759963   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0503 21:32:09.840467   14195 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0503 21:32:09.840488   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0503 21:32:09.890717   14195 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0503 21:32:09.890749   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0503 21:32:09.896671   14195 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0503 21:32:09.896692   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0503 21:32:09.902453   14195 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0503 21:32:09.902471   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0503 21:32:09.931432   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0503 21:32:10.031261   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0503 21:32:10.094747   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0503 21:32:10.096163   14195 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0503 21:32:10.096183   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0503 21:32:10.096671   14195 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0503 21:32:10.096691   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0503 21:32:10.143600   14195 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0503 21:32:10.143623   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0503 21:32:10.167646   14195 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0503 21:32:10.167678   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0503 21:32:10.171090   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0503 21:32:10.477035   14195 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0503 21:32:10.477057   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0503 21:32:10.526173   14195 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0503 21:32:10.526203   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0503 21:32:10.547908   14195 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0503 21:32:10.547928   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0503 21:32:10.911955   14195 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0503 21:32:10.911995   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0503 21:32:11.148335   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0503 21:32:11.184534   14195 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0503 21:32:11.184558   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0503 21:32:11.249237   14195 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0503 21:32:11.249263   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0503 21:32:11.420130   14195 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0503 21:32:11.420153   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0503 21:32:11.432922   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.131739742s)
	I0503 21:32:11.432978   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:11.432990   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:11.432991   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.021153425s)
	I0503 21:32:11.433029   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:11.433044   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:11.433376   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:11.433387   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:11.433396   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:11.433410   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:11.433426   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:11.433444   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:11.433409   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:11.433411   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:11.433504   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:11.433530   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:11.433638   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:11.433653   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:11.433714   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:11.433724   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:11.474646   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:11.474673   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:11.474939   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:11.474943   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:11.474958   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:11.475002   14195 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0503 21:32:11.475025   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0503 21:32:11.516139   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0503 21:32:11.706654   14195 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0503 21:32:11.706684   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0503 21:32:11.739594   14195 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0503 21:32:11.739622   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0503 21:32:12.150698   14195 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0503 21:32:12.150719   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0503 21:32:12.217222   14195 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0503 21:32:12.217249   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0503 21:32:12.505106   14195 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0503 21:32:12.505127   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0503 21:32:12.567635   14195 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.109806132s)
	I0503 21:32:12.567631   14195 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.109537659s)
	I0503 21:32:12.567775   14195 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0503 21:32:12.568360   14195 node_ready.go:35] waiting up to 6m0s for node "addons-146858" to be "Ready" ...
	I0503 21:32:12.576772   14195 node_ready.go:49] node "addons-146858" has status "Ready":"True"
	I0503 21:32:12.576801   14195 node_ready.go:38] duration metric: took 8.418349ms for node "addons-146858" to be "Ready" ...
	I0503 21:32:12.576829   14195 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0503 21:32:12.590695   14195 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-45l5x" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:12.611328   14195 pod_ready.go:92] pod "coredns-7db6d8ff4d-45l5x" in "kube-system" namespace has status "Ready":"True"
	I0503 21:32:12.611356   14195 pod_ready.go:81] duration metric: took 20.626861ms for pod "coredns-7db6d8ff4d-45l5x" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:12.611365   14195 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bs2xx" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:12.656770   14195 pod_ready.go:92] pod "coredns-7db6d8ff4d-bs2xx" in "kube-system" namespace has status "Ready":"True"
	I0503 21:32:12.656791   14195 pod_ready.go:81] duration metric: took 45.420847ms for pod "coredns-7db6d8ff4d-bs2xx" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:12.656800   14195 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-146858" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:12.682148   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0503 21:32:12.686964   14195 pod_ready.go:92] pod "etcd-addons-146858" in "kube-system" namespace has status "Ready":"True"
	I0503 21:32:12.686989   14195 pod_ready.go:81] duration metric: took 30.183689ms for pod "etcd-addons-146858" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:12.686999   14195 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-146858" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:12.714115   14195 pod_ready.go:92] pod "kube-apiserver-addons-146858" in "kube-system" namespace has status "Ready":"True"
	I0503 21:32:12.714138   14195 pod_ready.go:81] duration metric: took 27.132359ms for pod "kube-apiserver-addons-146858" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:12.714150   14195 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-146858" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:12.985718   14195 pod_ready.go:92] pod "kube-controller-manager-addons-146858" in "kube-system" namespace has status "Ready":"True"
	I0503 21:32:12.985747   14195 pod_ready.go:81] duration metric: took 271.59053ms for pod "kube-controller-manager-addons-146858" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:12.985757   14195 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tx6v2" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:12.999274   14195 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0503 21:32:12.999296   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0503 21:32:13.085860   14195 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-146858" context rescaled to 1 replicas
	I0503 21:32:13.303237   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0503 21:32:13.378597   14195 pod_ready.go:92] pod "kube-proxy-tx6v2" in "kube-system" namespace has status "Ready":"True"
	I0503 21:32:13.378622   14195 pod_ready.go:81] duration metric: took 392.859505ms for pod "kube-proxy-tx6v2" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:13.378632   14195 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-146858" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:13.804943   14195 pod_ready.go:92] pod "kube-scheduler-addons-146858" in "kube-system" namespace has status "Ready":"True"
	I0503 21:32:13.804975   14195 pod_ready.go:81] duration metric: took 426.335657ms for pod "kube-scheduler-addons-146858" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:13.804987   14195 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-mwfx8" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:15.835953   14195 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-mwfx8" in "kube-system" namespace has status "Ready":"False"
	I0503 21:32:15.847394   14195 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0503 21:32:15.847430   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:15.850473   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:15.850946   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:15.850977   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:15.851153   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:15.851334   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:15.851472   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:15.851581   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:32:16.395254   14195 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0503 21:32:16.791028   14195 addons.go:234] Setting addon gcp-auth=true in "addons-146858"
	I0503 21:32:16.791091   14195 host.go:66] Checking if "addons-146858" exists ...
	I0503 21:32:16.791411   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:16.791439   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:16.807375   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44185
	I0503 21:32:16.807856   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:16.808408   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:16.808427   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:16.808777   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:16.809394   14195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:32:16.809430   14195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:32:16.824826   14195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45027
	I0503 21:32:16.825780   14195 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:32:16.826332   14195 main.go:141] libmachine: Using API Version  1
	I0503 21:32:16.826355   14195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:32:16.826648   14195 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:32:16.826830   14195 main.go:141] libmachine: (addons-146858) Calling .GetState
	I0503 21:32:16.828264   14195 main.go:141] libmachine: (addons-146858) Calling .DriverName
	I0503 21:32:16.828496   14195 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0503 21:32:16.828523   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHHostname
	I0503 21:32:16.831499   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:16.831911   14195 main.go:141] libmachine: (addons-146858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:8d:3c", ip: ""} in network mk-addons-146858: {Iface:virbr1 ExpiryTime:2024-05-03 22:31:28 +0000 UTC Type:0 Mac:52:54:00:f7:8d:3c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-146858 Clientid:01:52:54:00:f7:8d:3c}
	I0503 21:32:16.831942   14195 main.go:141] libmachine: (addons-146858) DBG | domain addons-146858 has defined IP address 192.168.39.58 and MAC address 52:54:00:f7:8d:3c in network mk-addons-146858
	I0503 21:32:16.832081   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHPort
	I0503 21:32:16.832274   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHKeyPath
	I0503 21:32:16.832459   14195 main.go:141] libmachine: (addons-146858) Calling .GetSSHUsername
	I0503 21:32:16.832603   14195 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/addons-146858/id_rsa Username:docker}
	I0503 21:32:18.375773   14195 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-mwfx8" in "kube-system" namespace has status "Ready":"False"
	I0503 21:32:18.618462   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.12573562s)
	I0503 21:32:18.618491   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.095868643s)
	I0503 21:32:18.618513   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.618524   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.618523   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.618537   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.618588   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.079026759s)
	I0503 21:32:18.618618   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.618623   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.067965254s)
	I0503 21:32:18.618628   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.618638   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.618653   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.618705   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.687238289s)
	I0503 21:32:18.618718   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.587418265s)
	I0503 21:32:18.618733   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.618733   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.618746   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.618775   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.523999172s)
	I0503 21:32:18.618786   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.618788   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.618797   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.618890   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.447776981s)
	I0503 21:32:18.618906   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.618915   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.618973   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.470614069s)
	I0503 21:32:18.618987   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.618995   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.618997   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.619011   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.619021   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.619029   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.619123   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.102951775s)
	I0503 21:32:18.619139   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.619162   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	W0503 21:32:18.619167   14195 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0503 21:32:18.619192   14195 retry.go:31] will retry after 347.548223ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0503 21:32:18.619227   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.619236   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.619239   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.937060212s)
	I0503 21:32:18.619243   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.619251   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.619258   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.619274   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.619353   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.619369   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.619377   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.619392   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.619512   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.619545   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.619565   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.619572   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.619579   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.619586   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.619624   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.619668   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.619676   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.619683   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.619690   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.619700   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.619718   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.619726   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.619727   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.619739   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.619772   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.619794   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.619800   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.619806   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.619812   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.619846   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.619852   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.619860   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.619866   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.619977   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.620009   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.620046   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.620406   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.620438   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.620445   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.620982   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.621000   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.621008   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.621016   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.621037   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.621047   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.621055   14195 addons.go:470] Verifying addon ingress=true in "addons-146858"
	I0503 21:32:18.624416   14195 out.go:177] * Verifying ingress addon...
	I0503 21:32:18.621484   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.621504   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.621522   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.621543   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.621559   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.621573   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.621590   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.621610   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.621648   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.621662   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.623533   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.623558   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.624474   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.624452   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.624483   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.624491   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.625931   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.625942   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.625942   14195 addons.go:470] Verifying addon metrics-server=true in "addons-146858"
	I0503 21:32:18.624498   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.625965   14195 addons.go:470] Verifying addon registry=true in "addons-146858"
	I0503 21:32:18.624503   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.625882   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.625999   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.627222   14195 out.go:177] * Verifying registry addon...
	I0503 21:32:18.626270   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.626273   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.628767   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.626277   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:18.626294   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.626616   14195 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0503 21:32:18.629321   14195 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0503 21:32:18.630375   14195 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-146858 service yakd-dashboard -n yakd-dashboard
	
	I0503 21:32:18.630414   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.659472   14195 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0503 21:32:18.659509   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:18.663968   14195 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0503 21:32:18.663994   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:18.678851   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:18.678892   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:18.679156   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:18.679177   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:18.967429   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0503 21:32:19.135440   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:19.140368   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:19.660527   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:19.660685   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:19.663434   14195 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.834922585s)
	I0503 21:32:19.665448   14195 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0503 21:32:19.663426   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.360133946s)
	I0503 21:32:19.667108   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:19.667119   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:19.668786   14195 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0503 21:32:19.667379   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:19.667413   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:19.670145   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:19.670157   14195 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0503 21:32:19.670166   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0503 21:32:19.670173   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:19.670185   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:19.670430   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:19.670444   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:19.670455   14195 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-146858"
	I0503 21:32:19.670506   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:19.672101   14195 out.go:177] * Verifying csi-hostpath-driver addon...
	I0503 21:32:19.674028   14195 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0503 21:32:19.722936   14195 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0503 21:32:19.722965   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:19.730615   14195 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0503 21:32:19.730641   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0503 21:32:19.795400   14195 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0503 21:32:19.795421   14195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0503 21:32:19.882470   14195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0503 21:32:20.137211   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:20.137493   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:20.179909   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:20.636908   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:20.637647   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:20.680804   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:20.812551   14195 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-mwfx8" in "kube-system" namespace has status "Ready":"False"
	I0503 21:32:20.862334   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.894836127s)
	I0503 21:32:20.862386   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:20.862399   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:20.862649   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:20.862681   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:20.862688   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:20.862695   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:20.862701   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:20.862971   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:20.862982   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:21.147254   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:21.147629   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:21.197618   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:21.522231   14195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.639711907s)
	I0503 21:32:21.522293   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:21.522306   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:21.522599   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:21.522646   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:21.522664   14195 main.go:141] libmachine: Making call to close driver server
	I0503 21:32:21.522681   14195 main.go:141] libmachine: (addons-146858) Calling .Close
	I0503 21:32:21.522731   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:21.522922   14195 main.go:141] libmachine: Successfully made call to close driver server
	I0503 21:32:21.522956   14195 main.go:141] libmachine: Making call to close connection to plugin binary
	I0503 21:32:21.522999   14195 main.go:141] libmachine: (addons-146858) DBG | Closing plugin on server side
	I0503 21:32:21.524605   14195 addons.go:470] Verifying addon gcp-auth=true in "addons-146858"
	I0503 21:32:21.526474   14195 out.go:177] * Verifying gcp-auth addon...
	I0503 21:32:21.529597   14195 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0503 21:32:21.594583   14195 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0503 21:32:21.594611   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:21.647728   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:21.648728   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:21.681183   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:22.033608   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:22.141410   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:22.142268   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:22.180822   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:22.533520   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:22.637451   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:22.637790   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:22.678572   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:23.034133   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:23.137531   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:23.137629   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:23.178579   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:23.312330   14195 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-mwfx8" in "kube-system" namespace has status "Ready":"False"
	I0503 21:32:23.533574   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:23.638234   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:23.638585   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:23.679209   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:24.032656   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:24.135833   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:24.136405   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:24.180417   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:24.533098   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:24.636165   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:24.637800   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:24.678942   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:25.033198   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:25.177947   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:25.178194   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:25.336206   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:25.339529   14195 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-mwfx8" in "kube-system" namespace has status "Ready":"False"
	I0503 21:32:25.647785   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:25.648594   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:25.648703   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:25.680787   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:26.033438   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:26.136173   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:26.136228   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:26.180266   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:26.534433   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:26.636691   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:26.638856   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:26.680010   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:27.032888   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:27.135511   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:27.135558   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:27.179395   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:27.535896   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:27.635821   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:27.637534   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:27.680009   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:27.813114   14195 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-mwfx8" in "kube-system" namespace has status "Ready":"True"
	I0503 21:32:27.813137   14195 pod_ready.go:81] duration metric: took 14.008141425s for pod "nvidia-device-plugin-daemonset-mwfx8" in "kube-system" namespace to be "Ready" ...
	I0503 21:32:27.813146   14195 pod_ready.go:38] duration metric: took 15.236284586s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0503 21:32:27.813165   14195 api_server.go:52] waiting for apiserver process to appear ...
	I0503 21:32:27.813216   14195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 21:32:27.832292   14195 api_server.go:72] duration metric: took 19.158209315s to wait for apiserver process to appear ...
	I0503 21:32:27.832314   14195 api_server.go:88] waiting for apiserver healthz status ...
	I0503 21:32:27.832333   14195 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I0503 21:32:27.836625   14195 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I0503 21:32:27.837608   14195 api_server.go:141] control plane version: v1.30.0
	I0503 21:32:27.837634   14195 api_server.go:131] duration metric: took 5.313125ms to wait for apiserver health ...
	I0503 21:32:27.837644   14195 system_pods.go:43] waiting for kube-system pods to appear ...
	I0503 21:32:27.846919   14195 system_pods.go:59] 18 kube-system pods found
	I0503 21:32:27.846951   14195 system_pods.go:61] "coredns-7db6d8ff4d-45l5x" [c4827893-9aec-46e7-8433-39dc69d657c5] Running
	I0503 21:32:27.846960   14195 system_pods.go:61] "csi-hostpath-attacher-0" [39767ca6-602c-4ebe-8a7c-2f94cfa40ef0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0503 21:32:27.846966   14195 system_pods.go:61] "csi-hostpath-resizer-0" [fcd35767-f994-4c7f-94a7-60e577529c2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0503 21:32:27.846972   14195 system_pods.go:61] "csi-hostpathplugin-2pg78" [ecf1b94f-3f96-4eac-8115-e778976a9b8b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0503 21:32:27.846979   14195 system_pods.go:61] "etcd-addons-146858" [f0f1a505-493a-4f5a-be9b-0e6689a13caf] Running
	I0503 21:32:27.846983   14195 system_pods.go:61] "kube-apiserver-addons-146858" [3b0d2aaf-9c05-4e1c-8aa0-9d66bb2988a0] Running
	I0503 21:32:27.846990   14195 system_pods.go:61] "kube-controller-manager-addons-146858" [4c3f5351-0081-4f14-89cf-eb479ffdf61e] Running
	I0503 21:32:27.847003   14195 system_pods.go:61] "kube-ingress-dns-minikube" [695241dc-f6c2-4bf2-9ff6-5cc2007acb6e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0503 21:32:27.847011   14195 system_pods.go:61] "kube-proxy-tx6v2" [75b06b54-37ea-4b9e-a112-aec8ef406682] Running
	I0503 21:32:27.847015   14195 system_pods.go:61] "kube-scheduler-addons-146858" [f9215d98-b7aa-4a44-8360-28c12eddd16e] Running
	I0503 21:32:27.847020   14195 system_pods.go:61] "metrics-server-c59844bb4-sntqz" [10c0ce9e-46a5-44f6-b11e-c88357ae3a30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0503 21:32:27.847027   14195 system_pods.go:61] "nvidia-device-plugin-daemonset-mwfx8" [927e681f-9b2a-492e-9276-e9b8f9d5e724] Running
	I0503 21:32:27.847032   14195 system_pods.go:61] "registry-ngbjf" [64997a94-617f-469b-9123-d13774652b03] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0503 21:32:27.847039   14195 system_pods.go:61] "registry-proxy-lmkbf" [f7405a25-3cc5-4e99-a9b2-e79b705f75a9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0503 21:32:27.847047   14195 system_pods.go:61] "snapshot-controller-745499f584-7ks4n" [7037e023-5263-4a36-80b4-25709e24cc2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0503 21:32:27.847053   14195 system_pods.go:61] "snapshot-controller-745499f584-lxzzh" [6f7a1e48-4cb4-4f9a-b268-1db5dc478f0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0503 21:32:27.847059   14195 system_pods.go:61] "storage-provisioner" [4eb93a82-c2db-4910-b4fc-9de189ef3eb6] Running
	I0503 21:32:27.847063   14195 system_pods.go:61] "tiller-deploy-6677d64bcd-6tlnr" [6659d916-b68d-4378-84d0-76c5fd93ba89] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0503 21:32:27.847068   14195 system_pods.go:74] duration metric: took 9.418695ms to wait for pod list to return data ...
	I0503 21:32:27.847077   14195 default_sa.go:34] waiting for default service account to be created ...
	I0503 21:32:27.849108   14195 default_sa.go:45] found service account: "default"
	I0503 21:32:27.849125   14195 default_sa.go:55] duration metric: took 2.040558ms for default service account to be created ...
	I0503 21:32:27.849131   14195 system_pods.go:116] waiting for k8s-apps to be running ...
	I0503 21:32:27.858124   14195 system_pods.go:86] 18 kube-system pods found
	I0503 21:32:27.858147   14195 system_pods.go:89] "coredns-7db6d8ff4d-45l5x" [c4827893-9aec-46e7-8433-39dc69d657c5] Running
	I0503 21:32:27.858155   14195 system_pods.go:89] "csi-hostpath-attacher-0" [39767ca6-602c-4ebe-8a7c-2f94cfa40ef0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0503 21:32:27.858162   14195 system_pods.go:89] "csi-hostpath-resizer-0" [fcd35767-f994-4c7f-94a7-60e577529c2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0503 21:32:27.858171   14195 system_pods.go:89] "csi-hostpathplugin-2pg78" [ecf1b94f-3f96-4eac-8115-e778976a9b8b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0503 21:32:27.858178   14195 system_pods.go:89] "etcd-addons-146858" [f0f1a505-493a-4f5a-be9b-0e6689a13caf] Running
	I0503 21:32:27.858183   14195 system_pods.go:89] "kube-apiserver-addons-146858" [3b0d2aaf-9c05-4e1c-8aa0-9d66bb2988a0] Running
	I0503 21:32:27.858188   14195 system_pods.go:89] "kube-controller-manager-addons-146858" [4c3f5351-0081-4f14-89cf-eb479ffdf61e] Running
	I0503 21:32:27.858196   14195 system_pods.go:89] "kube-ingress-dns-minikube" [695241dc-f6c2-4bf2-9ff6-5cc2007acb6e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0503 21:32:27.858201   14195 system_pods.go:89] "kube-proxy-tx6v2" [75b06b54-37ea-4b9e-a112-aec8ef406682] Running
	I0503 21:32:27.858205   14195 system_pods.go:89] "kube-scheduler-addons-146858" [f9215d98-b7aa-4a44-8360-28c12eddd16e] Running
	I0503 21:32:27.858214   14195 system_pods.go:89] "metrics-server-c59844bb4-sntqz" [10c0ce9e-46a5-44f6-b11e-c88357ae3a30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0503 21:32:27.858219   14195 system_pods.go:89] "nvidia-device-plugin-daemonset-mwfx8" [927e681f-9b2a-492e-9276-e9b8f9d5e724] Running
	I0503 21:32:27.858227   14195 system_pods.go:89] "registry-ngbjf" [64997a94-617f-469b-9123-d13774652b03] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0503 21:32:27.858233   14195 system_pods.go:89] "registry-proxy-lmkbf" [f7405a25-3cc5-4e99-a9b2-e79b705f75a9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0503 21:32:27.858241   14195 system_pods.go:89] "snapshot-controller-745499f584-7ks4n" [7037e023-5263-4a36-80b4-25709e24cc2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0503 21:32:27.858249   14195 system_pods.go:89] "snapshot-controller-745499f584-lxzzh" [6f7a1e48-4cb4-4f9a-b268-1db5dc478f0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0503 21:32:27.858257   14195 system_pods.go:89] "storage-provisioner" [4eb93a82-c2db-4910-b4fc-9de189ef3eb6] Running
	I0503 21:32:27.858262   14195 system_pods.go:89] "tiller-deploy-6677d64bcd-6tlnr" [6659d916-b68d-4378-84d0-76c5fd93ba89] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0503 21:32:27.858270   14195 system_pods.go:126] duration metric: took 9.134312ms to wait for k8s-apps to be running ...
	I0503 21:32:27.858279   14195 system_svc.go:44] waiting for kubelet service to be running ....
	I0503 21:32:27.858325   14195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0503 21:32:27.875034   14195 system_svc.go:56] duration metric: took 16.752329ms WaitForService to wait for kubelet
	I0503 21:32:27.875049   14195 kubeadm.go:576] duration metric: took 19.200971756s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 21:32:27.875080   14195 node_conditions.go:102] verifying NodePressure condition ...
	I0503 21:32:27.878180   14195 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0503 21:32:27.878208   14195 node_conditions.go:123] node cpu capacity is 2
	I0503 21:32:27.878217   14195 node_conditions.go:105] duration metric: took 3.128966ms to run NodePressure ...
	I0503 21:32:27.878227   14195 start.go:240] waiting for startup goroutines ...
	I0503 21:32:28.033983   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:28.136016   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:28.136274   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:28.180179   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:28.536146   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:28.636449   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:28.636613   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:28.680434   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:29.033323   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:29.136146   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:29.137979   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:29.182137   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:29.535946   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:29.638056   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:29.638154   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:29.680554   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:30.033965   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:30.136055   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:30.137455   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:30.181028   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:30.534518   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:30.637008   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:30.637703   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:30.680102   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:31.034050   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:31.136404   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:31.137162   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:31.179714   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:31.533162   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:31.636485   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:31.637130   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:31.680777   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:32.034761   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:32.137144   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:32.138395   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:32.179741   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:32.533747   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:32.635301   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:32.635723   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:32.680894   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:33.034787   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:33.136558   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:33.136825   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:33.180397   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:33.532952   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:33.639013   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:33.647313   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:33.680032   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:34.060705   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:34.337047   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:34.338259   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:34.339635   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:34.563820   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:34.639383   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:34.642347   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:34.680769   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:35.033629   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:35.134948   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:35.135323   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:35.180118   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:35.533186   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:35.636333   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:35.638574   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:35.680752   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:36.034355   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:36.136629   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:36.137012   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:36.179824   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:36.533527   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:36.638805   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:36.639125   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:36.685488   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:37.035165   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:37.136302   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:37.138109   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:37.180776   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:37.533940   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:37.635539   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:37.636445   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:37.679989   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:38.034296   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:38.135818   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:38.136393   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:38.181211   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:38.535993   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:38.638110   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:38.638539   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:38.680914   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:39.036733   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:39.138621   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:39.138822   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:39.181191   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:39.534005   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:39.637877   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:39.638110   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:39.680042   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:40.033936   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:40.136739   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:40.139459   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:40.181218   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:40.536322   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:40.637170   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:40.638827   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:40.682080   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:41.033176   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:41.135336   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:41.137133   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:41.180505   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:41.534269   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:41.638419   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:41.638529   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:41.679759   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:42.033453   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:42.137361   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:42.138178   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:42.179530   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:42.535023   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:42.636540   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:42.637315   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:42.680543   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:43.036793   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:43.147449   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:43.147865   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:43.188844   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:43.534647   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:43.636927   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:43.637552   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:43.682277   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:44.033336   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:44.137352   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:44.142487   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:44.185108   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:44.534923   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:44.636257   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:44.638479   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:44.679537   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:45.034393   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:45.137357   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:45.138697   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:45.179783   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:45.538654   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:45.641337   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:45.643215   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:45.681693   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:46.034547   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:46.135158   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:46.136976   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:46.181357   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:46.533600   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:46.636089   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:46.636293   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:46.681574   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:47.033775   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:47.140877   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:47.143441   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:47.179479   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:47.536041   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:47.636087   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:47.640816   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:47.681266   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:48.034117   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:48.137123   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:48.137635   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:48.182987   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:48.533867   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:48.641040   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:48.641516   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:48.681401   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:49.033898   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:49.135938   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:49.136622   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:49.185818   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:49.533224   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:49.636406   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:49.636500   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:49.680515   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:50.034143   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:50.135713   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:50.136351   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:50.182707   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:50.533975   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:50.642715   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:50.647188   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:50.685256   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:51.034457   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:51.136311   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:51.136316   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:51.190797   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:51.534514   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:51.635614   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:51.636255   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:51.682519   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:52.033980   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:52.135183   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:52.136117   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:52.179279   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:52.533349   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:52.636274   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:52.636567   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:52.684257   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:53.033504   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:53.137374   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:53.139692   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:53.181071   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:53.534329   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:53.637699   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:53.638387   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:53.680138   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:54.034038   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:54.136706   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:54.136841   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:54.179767   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:54.534439   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:54.638300   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:54.638418   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:54.679516   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:55.033722   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:55.135004   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:55.136019   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:55.182160   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:55.534587   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:55.636583   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:55.636754   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:55.679054   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:56.037829   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:56.135506   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:56.136173   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:56.180567   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:56.534080   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:56.638232   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:56.638582   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:56.680527   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:57.033561   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:57.135238   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:57.135697   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:57.180049   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:57.533842   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:57.637262   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:57.638423   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:57.684346   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:58.032952   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:58.136701   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:58.136819   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:58.179442   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:58.534354   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:58.636568   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:58.637529   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:58.679382   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:59.033418   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:59.137590   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:59.137864   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:59.183962   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:32:59.533327   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:32:59.637886   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:32:59.638055   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:32:59.679795   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:00.036130   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:00.135702   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:00.138752   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:00.181639   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:00.533030   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:00.637795   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:00.637807   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:00.679225   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:01.041045   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:01.147549   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:01.149079   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:01.184796   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:01.533405   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:01.635847   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:01.636186   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:01.690558   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:02.033769   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:02.135263   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:02.136972   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:02.180108   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:02.532703   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:02.635609   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:02.635823   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:02.679849   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:03.035367   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:03.136034   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:03.136435   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:03.184236   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:03.532767   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:03.635380   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:03.635567   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:03.679415   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:04.034836   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:04.136295   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:04.144308   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:04.187746   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:04.533931   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:04.638792   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:04.638930   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:04.683501   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:05.034399   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:05.139208   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:05.142684   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:05.179720   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:05.533770   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:05.636188   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:05.637351   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:05.683588   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:06.033882   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:06.134769   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:06.135755   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:06.179415   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:06.532651   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:06.637114   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:06.638441   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:06.679429   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:07.033491   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:07.136924   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:07.137734   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:07.183947   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:07.533960   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:07.637168   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:07.637303   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:07.680059   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:08.034776   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:08.135897   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:08.136548   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:08.182816   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:08.533712   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:08.637211   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:08.637603   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:08.681788   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:09.034938   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:09.137214   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:09.138158   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:09.180415   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:09.533415   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:09.637399   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:09.637534   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:09.680088   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:10.033098   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:10.137034   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:10.143312   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:10.180906   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:10.533477   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:10.638625   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:10.638684   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:10.698333   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:11.038172   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:11.137559   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:11.137709   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:11.180356   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:11.533724   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:11.637044   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:11.637164   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:11.680199   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:12.032922   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:12.135617   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:12.135847   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:12.181088   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:12.535354   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:12.636827   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:12.637094   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:12.680181   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:13.037721   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:13.141518   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:13.141968   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:13.186554   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:13.532935   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:13.636471   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:13.638033   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0503 21:33:13.680533   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:14.033561   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:14.137056   14195 kapi.go:107] duration metric: took 55.507731228s to wait for kubernetes.io/minikube-addons=registry ...
	I0503 21:33:14.137122   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:14.180526   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:14.533794   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:14.635688   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:14.683426   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:15.033564   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:15.135696   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:15.180230   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:15.534014   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:15.635655   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:15.680677   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:16.034821   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:16.136631   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:16.184947   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:16.534313   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:16.636237   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:16.680888   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:17.034340   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:17.136202   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:17.350743   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:17.534374   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:17.635855   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:17.680255   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:18.033686   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:18.135629   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:18.188698   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:18.533392   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:18.636452   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:18.688804   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:19.033900   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:19.135436   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:19.180800   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:19.533650   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:19.637554   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:19.683124   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:20.034118   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:20.135506   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:20.180897   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:20.534641   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:20.637307   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:20.681086   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:21.034753   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:21.135224   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:21.183189   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:21.535373   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:21.635936   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:21.679755   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:22.033370   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:22.137400   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:22.193768   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:22.785256   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:22.790021   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:22.791597   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:23.034922   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:23.138333   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:23.180785   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:23.533407   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:23.635715   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:23.680478   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:24.034849   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:24.135029   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:24.180777   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:24.534167   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:24.635825   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:24.681387   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:25.041118   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:25.167022   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:25.187285   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:25.534643   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:26.115049   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:26.117819   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:26.118142   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:26.141635   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:26.178696   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:26.534057   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:26.635760   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:26.681866   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:27.034981   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:27.150408   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:27.183212   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:27.533730   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:27.635153   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:27.680436   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:28.037275   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:28.137137   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:28.180248   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:28.533169   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:28.636745   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:28.683856   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:29.034226   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:29.136584   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:29.183424   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:29.533470   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:29.636569   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:29.679487   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:30.035853   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:30.150244   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:30.200527   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:30.533625   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:30.637234   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:30.679433   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:31.035108   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:31.135125   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:31.180842   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:31.537772   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:31.635728   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:31.679825   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:32.057416   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:32.135825   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:32.187200   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:32.533515   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:32.638713   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:32.681206   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:33.033252   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:33.160051   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:33.179518   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:33.533816   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:33.637110   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:33.684560   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:34.033120   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:34.135380   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:34.189493   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:34.536851   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:34.635408   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:34.687561   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:35.041418   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:35.135896   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:35.184535   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:35.533618   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:35.636085   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:35.679667   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:36.033680   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:36.134981   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:36.184564   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:36.534317   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:36.636431   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:36.683126   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:37.034237   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:37.135522   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:37.180166   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:37.532840   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:37.637342   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:37.682909   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:38.038738   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:38.135503   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:38.180082   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:38.533877   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:38.635341   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:38.680502   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:39.035736   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:39.134875   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:39.188517   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:39.534036   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:39.636615   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:39.680996   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0503 21:33:40.033993   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:40.134960   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:40.180550   14195 kapi.go:107] duration metric: took 1m20.506518857s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0503 21:33:40.533237   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:40.635594   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:41.035432   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:41.136266   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:41.533138   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:41.635751   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:42.033939   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:42.136815   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:42.533891   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:42.636267   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:43.033560   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:43.136460   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:43.533570   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:43.635498   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:44.034101   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:44.137935   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:44.534079   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:44.635268   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:45.033530   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:45.136463   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:45.536188   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:45.654256   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:46.033538   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:46.136035   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:46.534346   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:46.636296   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:47.033732   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:47.136764   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:47.534493   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:47.635595   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:48.034271   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:48.136233   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:48.535752   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:48.637653   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:49.033980   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:49.139879   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:49.539243   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:49.636813   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:50.033813   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:50.135069   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:50.533453   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:50.636505   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:51.034473   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:51.136710   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:51.534035   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:51.640097   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:52.033835   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:52.135317   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:52.533440   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:52.636588   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:53.034184   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:53.135321   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:53.533961   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:53.635047   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:54.034089   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:54.135685   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:54.534920   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:54.635607   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:55.034532   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:55.137156   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:55.533031   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:55.635571   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:56.033996   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:56.135578   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:56.533745   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:56.635829   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:57.034064   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:57.135784   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:57.534468   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:57.636282   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:58.035174   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:58.136876   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:58.533917   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:58.636122   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:59.034069   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:59.135909   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:33:59.533638   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:33:59.636777   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:00.034402   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:00.135890   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:00.534056   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:00.635913   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:01.033354   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:01.136337   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:01.533592   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:01.638938   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:02.034044   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:02.135524   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:02.533738   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:02.634819   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:03.034706   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:03.136188   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:03.535713   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:03.634820   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:04.034226   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:04.136024   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:04.533835   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:04.637906   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:05.041623   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:05.135843   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:05.535745   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:05.635355   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:06.036320   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:06.136200   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:06.533998   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:06.636144   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:07.033864   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:07.136546   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:07.533749   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:07.635384   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:08.033498   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:08.135798   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:08.534084   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:08.635236   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:09.033830   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:09.135602   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:09.533585   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:09.635919   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:10.034365   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:10.136856   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:10.535681   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:10.638680   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:11.034022   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:11.135992   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:11.536117   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:11.635591   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:12.034044   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:12.136390   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:12.533555   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:12.636924   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:13.034468   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:13.135982   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:13.533348   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:13.637088   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:14.033406   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:14.137375   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:14.533838   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:14.635278   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:15.033664   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:15.135852   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:15.534056   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:15.635558   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:16.034908   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:16.135885   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:16.534316   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:16.636957   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:17.033413   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:17.138429   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:17.536354   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:17.636527   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:18.034506   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:18.138982   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:18.534192   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:18.635519   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:19.033922   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:19.135320   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:19.534781   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:19.635847   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:20.034391   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:20.135762   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:20.534741   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:20.635213   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:21.033391   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:21.136635   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:21.534759   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:21.635825   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:22.034017   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:22.135963   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:22.533914   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:22.636814   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:23.033570   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:23.139877   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:23.533948   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:23.636802   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:24.033982   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:24.135791   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:24.534656   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:24.636406   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:25.034959   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:25.135316   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:25.533953   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:25.635593   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:26.033740   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:26.135076   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:26.534553   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:26.636092   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:27.033148   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:27.136174   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:27.533255   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:27.635455   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:28.034163   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:28.136079   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:28.533568   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:28.636301   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:29.033847   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:29.135232   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:29.533748   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:29.635202   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:30.034063   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:30.136682   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:30.533954   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:30.635770   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:31.033745   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:31.135532   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:31.533531   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:31.636124   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:32.033202   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:32.135819   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:32.534200   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:32.635446   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:33.034314   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:33.135596   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:33.533525   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:33.636087   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:34.033676   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:34.135181   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:34.533554   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:34.639292   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:35.033493   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:35.141060   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:35.533238   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:35.635649   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:36.033297   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:36.135487   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:36.532738   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:36.635971   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:37.033375   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:37.135772   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:37.535768   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:37.636978   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:38.034668   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:38.135089   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:38.533756   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:38.635014   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:39.034361   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:39.135857   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:39.533978   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:39.634958   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:40.033655   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:40.135532   14195 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0503 21:34:40.534596   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:40.635914   14195 kapi.go:107] duration metric: took 2m22.009293531s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0503 21:34:41.033085   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:41.533401   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:42.033383   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:42.533145   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:43.034396   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:43.533827   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:44.036779   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:44.535619   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:45.033545   14195 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0503 21:34:45.532917   14195 kapi.go:107] duration metric: took 2m24.003319121s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0503 21:34:45.534662   14195 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-146858 cluster.
	I0503 21:34:45.536313   14195 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0503 21:34:45.537808   14195 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0503 21:34:45.539308   14195 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, storage-provisioner, cloud-spanner, ingress-dns, helm-tiller, metrics-server, yakd, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0503 21:34:45.540716   14195 addons.go:505] duration metric: took 2m36.866607764s for enable addons: enabled=[nvidia-device-plugin default-storageclass storage-provisioner cloud-spanner ingress-dns helm-tiller metrics-server yakd inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0503 21:34:45.540751   14195 start.go:245] waiting for cluster config update ...
	I0503 21:34:45.540766   14195 start.go:254] writing updated cluster config ...
	I0503 21:34:45.540998   14195 ssh_runner.go:195] Run: rm -f paused
	I0503 21:34:45.594624   14195 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0503 21:34:45.596748   14195 out.go:177] * Done! kubectl is now configured to use "addons-146858" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	efde462be0c1c       f62daa0d2c724       2 seconds ago        Exited              busybox                                  0                   4de1715083f27       test-local-path
	41c9825e5fc6e       db2fc13d44d50       28 seconds ago       Running             gcp-auth                                 0                   12de60a1b965b       gcp-auth-5db96cd9b4-5287b
	f124d5d62413b       ee54966f3891d       32 seconds ago       Running             controller                               0                   7c7d0af70e62c       ingress-nginx-controller-768f948f8f-f5x4f
	1233f167075e1       738351fd438f0       About a minute ago   Running             csi-snapshotter                          0                   e779eb013a2fe       csi-hostpathplugin-2pg78
	a911c81323b67       931dbfd16f87c       About a minute ago   Running             csi-provisioner                          0                   e779eb013a2fe       csi-hostpathplugin-2pg78
	b7d5278a2f467       e899260153aed       About a minute ago   Running             liveness-probe                           0                   e779eb013a2fe       csi-hostpathplugin-2pg78
	b02202cc339e9       e255e073c508c       About a minute ago   Running             hostpath                                 0                   e779eb013a2fe       csi-hostpathplugin-2pg78
	c189f253af320       88ef14a257f42       About a minute ago   Running             node-driver-registrar                    0                   e779eb013a2fe       csi-hostpathplugin-2pg78
	7bbc5fe4f7c33       19a639eda60f0       About a minute ago   Running             csi-resizer                              0                   66af517d249a0       csi-hostpath-resizer-0
	86b061197c5ae       a1ed5895ba635       About a minute ago   Running             csi-external-health-monitor-controller   0                   e779eb013a2fe       csi-hostpathplugin-2pg78
	a85ae23330e5e       59cbb42146a37       About a minute ago   Running             csi-attacher                             0                   dc6ada6af6a2c       csi-hostpath-attacher-0
	206062c1eb624       684c5ea3b61b2       About a minute ago   Exited              patch                                    1                   498d10a313509       ingress-nginx-admission-patch-md55m
	1f1c4b1a02366       684c5ea3b61b2       About a minute ago   Exited              create                                   0                   22efa69187746       ingress-nginx-admission-create-msmsl
	bb20a8a45883a       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   dde4499a23a80       snapshot-controller-745499f584-lxzzh
	4d394e7c37f7e       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   f505c17467295       snapshot-controller-745499f584-7ks4n
	6e818e5074df0       e16d1e3a10667       2 minutes ago        Running             local-path-provisioner                   0                   92f23af0b53dd       local-path-provisioner-8d985888d-9fxdr
	9b7f2442dbf1f       31de47c733c91       2 minutes ago        Running             yakd                                     0                   80bcdb110f43b       yakd-dashboard-5ddbf7d777-xbw8h
	1fffe0f8196af       3f39089e90831       2 minutes ago        Running             tiller                                   0                   0c9c47fefe8a6       tiller-deploy-6677d64bcd-6tlnr
	b1cc0e651fa0f       1499ed4fbd0aa       2 minutes ago        Running             minikube-ingress-dns                     0                   d1517414c34dc       kube-ingress-dns-minikube
	cd6409b1ba3cd       5d192b519c227       2 minutes ago        Running             cloud-spanner-emulator                   0                   3e581e17c21d6       cloud-spanner-emulator-6dc8d859f6-52lb5
	3ff996d58c018       6e38f40d628db       2 minutes ago        Running             storage-provisioner                      0                   61d26e634a0cd       storage-provisioner
	6f5b5676dfad3       cbb01a7bd410d       3 minutes ago        Running             coredns                                  0                   0761f8786be60       coredns-7db6d8ff4d-45l5x
	4cedeb4666702       a0bf559e280cf       3 minutes ago        Running             kube-proxy                               0                   f371df5965230       kube-proxy-tx6v2
	be5c6e8034bc0       c7aad43836fa5       3 minutes ago        Running             kube-controller-manager                  0                   884deb78bac1a       kube-controller-manager-addons-146858
	e5f7463a362b7       3861cfcd7c04c       3 minutes ago        Running             etcd                                     0                   7c7b1857f5662       etcd-addons-146858
	cd7a346eb205c       259c8277fcbbc       3 minutes ago        Running             kube-scheduler                           0                   5fa151cf647e0       kube-scheduler-addons-146858
	56018d31e5460       c42f13656d0b2       3 minutes ago        Running             kube-apiserver                           0                   8b644c67c9e81       kube-apiserver-addons-146858
	
	
	==> containerd <==
	May 03 21:35:10 addons-146858 containerd[654]: time="2024-05-03T21:35:10.638525131Z" level=info msg="StartContainer for \"efde462be0c1c66e49a316ee2a43bf6ba8cf96194f9ca810a1da4bd009304eb3\" returns successfully"
	May 03 21:35:10 addons-146858 containerd[654]: time="2024-05-03T21:35:10.693684737Z" level=info msg="shim disconnected" id=efde462be0c1c66e49a316ee2a43bf6ba8cf96194f9ca810a1da4bd009304eb3 namespace=k8s.io
	May 03 21:35:10 addons-146858 containerd[654]: time="2024-05-03T21:35:10.693784378Z" level=warning msg="cleaning up after shim disconnected" id=efde462be0c1c66e49a316ee2a43bf6ba8cf96194f9ca810a1da4bd009304eb3 namespace=k8s.io
	May 03 21:35:10 addons-146858 containerd[654]: time="2024-05-03T21:35:10.693795711Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.202386723Z" level=info msg="StopContainer for \"6e2a9de01324a1b681e4b311d6f6ae87cbf3da835a5433e4be10f47702cb869e\" with timeout 30 (s)"
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.203379983Z" level=info msg="Stop container \"6e2a9de01324a1b681e4b311d6f6ae87cbf3da835a5433e4be10f47702cb869e\" with signal quit"
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.272944519Z" level=info msg="shim disconnected" id=6e2a9de01324a1b681e4b311d6f6ae87cbf3da835a5433e4be10f47702cb869e namespace=k8s.io
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.273015630Z" level=warning msg="cleaning up after shim disconnected" id=6e2a9de01324a1b681e4b311d6f6ae87cbf3da835a5433e4be10f47702cb869e namespace=k8s.io
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.273030143Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.313013635Z" level=info msg="StopContainer for \"6e2a9de01324a1b681e4b311d6f6ae87cbf3da835a5433e4be10f47702cb869e\" returns successfully"
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.313683886Z" level=info msg="StopPodSandbox for \"e914f04c12efe3e60e0afd69d909e54328b4ae9c34ec46abe7a2323f3a6f6fbf\""
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.313783167Z" level=info msg="Container to stop \"6e2a9de01324a1b681e4b311d6f6ae87cbf3da835a5433e4be10f47702cb869e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.376763967Z" level=info msg="shim disconnected" id=e914f04c12efe3e60e0afd69d909e54328b4ae9c34ec46abe7a2323f3a6f6fbf namespace=k8s.io
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.376838438Z" level=warning msg="cleaning up after shim disconnected" id=e914f04c12efe3e60e0afd69d909e54328b4ae9c34ec46abe7a2323f3a6f6fbf namespace=k8s.io
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.376851412Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.480839241Z" level=info msg="TearDown network for sandbox \"e914f04c12efe3e60e0afd69d909e54328b4ae9c34ec46abe7a2323f3a6f6fbf\" successfully"
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.481005287Z" level=info msg="StopPodSandbox for \"e914f04c12efe3e60e0afd69d909e54328b4ae9c34ec46abe7a2323f3a6f6fbf\" returns successfully"
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.520323884Z" level=info msg="RemoveContainer for \"6e2a9de01324a1b681e4b311d6f6ae87cbf3da835a5433e4be10f47702cb869e\""
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.544062860Z" level=info msg="RemoveContainer for \"6e2a9de01324a1b681e4b311d6f6ae87cbf3da835a5433e4be10f47702cb869e\" returns successfully"
	May 03 21:35:11 addons-146858 containerd[654]: time="2024-05-03T21:35:11.547366303Z" level=error msg="ContainerStatus for \"6e2a9de01324a1b681e4b311d6f6ae87cbf3da835a5433e4be10f47702cb869e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e2a9de01324a1b681e4b311d6f6ae87cbf3da835a5433e4be10f47702cb869e\": not found"
	May 03 21:35:12 addons-146858 containerd[654]: time="2024-05-03T21:35:12.547514933Z" level=info msg="StopPodSandbox for \"4de1715083f277361fde10aa9c81b3148aa16d5f4662aa9103fb737a3ae89f92\""
	May 03 21:35:12 addons-146858 containerd[654]: time="2024-05-03T21:35:12.548394391Z" level=info msg="Container to stop \"efde462be0c1c66e49a316ee2a43bf6ba8cf96194f9ca810a1da4bd009304eb3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	May 03 21:35:12 addons-146858 containerd[654]: time="2024-05-03T21:35:12.667884274Z" level=info msg="shim disconnected" id=4de1715083f277361fde10aa9c81b3148aa16d5f4662aa9103fb737a3ae89f92 namespace=k8s.io
	May 03 21:35:12 addons-146858 containerd[654]: time="2024-05-03T21:35:12.668036938Z" level=warning msg="cleaning up after shim disconnected" id=4de1715083f277361fde10aa9c81b3148aa16d5f4662aa9103fb737a3ae89f92 namespace=k8s.io
	May 03 21:35:12 addons-146858 containerd[654]: time="2024-05-03T21:35:12.668049030Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	
	
	==> coredns [6f5b5676dfad3444e69a7d5df30bdfcfd85604cc8c0067b79c31e438eff8f935] <==
	[INFO] 10.244.0.7:33213 - 34737 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001832896s
	[INFO] 10.244.0.7:47184 - 64204 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000135364s
	[INFO] 10.244.0.7:47184 - 41426 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000064622s
	[INFO] 10.244.0.7:54024 - 9010 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000184563s
	[INFO] 10.244.0.7:54024 - 14900 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000381172s
	[INFO] 10.244.0.7:47979 - 63630 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000843681s
	[INFO] 10.244.0.7:47979 - 7816 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.001046615s
	[INFO] 10.244.0.7:42003 - 8594 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088321s
	[INFO] 10.244.0.7:42003 - 42911 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000228858s
	[INFO] 10.244.0.7:54069 - 14637 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000231722s
	[INFO] 10.244.0.7:54069 - 18257 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000027838s
	[INFO] 10.244.0.7:46990 - 41740 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000123201s
	[INFO] 10.244.0.7:46990 - 54798 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000284466s
	[INFO] 10.244.0.7:56427 - 25517 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0000781s
	[INFO] 10.244.0.7:56427 - 43691 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000213975s
	[INFO] 10.244.0.22:56099 - 56429 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000553289s
	[INFO] 10.244.0.22:60785 - 33345 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000144192s
	[INFO] 10.244.0.22:36251 - 1388 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000114558s
	[INFO] 10.244.0.22:42004 - 38418 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128564s
	[INFO] 10.244.0.22:37272 - 57641 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077786s
	[INFO] 10.244.0.22:56536 - 17055 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135161s
	[INFO] 10.244.0.22:48600 - 20434 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000873934s
	[INFO] 10.244.0.22:44916 - 44538 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.000655034s
	[INFO] 10.244.0.25:58424 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000329261s
	[INFO] 10.244.0.25:53094 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00027673s
	
	
	==> describe nodes <==
	Name:               addons-146858
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-146858
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cc00050a34cebd4ea4e95f76540d25d17abab09a
	                    minikube.k8s.io/name=addons-146858
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_03T21_31_55_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-146858
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-146858"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 May 2024 21:31:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-146858
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 May 2024 21:35:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 May 2024 21:34:59 +0000   Fri, 03 May 2024 21:31:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 May 2024 21:34:59 +0000   Fri, 03 May 2024 21:31:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 May 2024 21:34:59 +0000   Fri, 03 May 2024 21:31:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 May 2024 21:34:59 +0000   Fri, 03 May 2024 21:31:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    addons-146858
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d368635f4bd47afbff5f9a4000a7968
	  System UUID:                8d368635-f4bd-47af-bff5-f9a4000a7968
	  Boot ID:                    ccd0fe36-2fa4-4280-9a5d-e33a831f6fee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.15
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6dc8d859f6-52lb5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  gcp-auth                    gcp-auth-5db96cd9b4-5287b                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-f5x4f    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         2m55s
	  kube-system                 coredns-7db6d8ff4d-45l5x                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m5s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 csi-hostpathplugin-2pg78                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 etcd-addons-146858                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m18s
	  kube-system                 kube-apiserver-addons-146858                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m20s
	  kube-system                 kube-controller-manager-addons-146858        200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-proxy-tx6v2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 kube-scheduler-addons-146858                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 snapshot-controller-745499f584-7ks4n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 snapshot-controller-745499f584-lxzzh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  kube-system                 tiller-deploy-6677d64bcd-6tlnr               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  local-path-storage          local-path-provisioner-8d985888d-9fxdr       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-xbw8h              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     2m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m3s   kube-proxy       
	  Normal  Starting                 3m18s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m18s  kubelet          Node addons-146858 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m18s  kubelet          Node addons-146858 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m18s  kubelet          Node addons-146858 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m18s  kubelet          Node addons-146858 status is now: NodeReady
	  Normal  RegisteredNode           3m5s   node-controller  Node addons-146858 event: Registered Node addons-146858 in Controller
	
	
	==> dmesg <==
	[  +0.061904] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.611601] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[  +4.025731] systemd-fstab-generator[859]: Ignoring "noauto" option for root device
	[  +0.622948] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.937144] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.069739] kauditd_printk_skb: 41 callbacks suppressed
	[May 3 21:32] systemd-fstab-generator[1424]: Ignoring "noauto" option for root device
	[  +0.164399] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.453099] kauditd_printk_skb: 117 callbacks suppressed
	[  +5.031122] kauditd_printk_skb: 128 callbacks suppressed
	[  +7.604077] kauditd_printk_skb: 91 callbacks suppressed
	[ +22.748796] kauditd_printk_skb: 4 callbacks suppressed
	[May 3 21:33] kauditd_printk_skb: 6 callbacks suppressed
	[ +11.602869] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.472750] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.661951] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.344644] kauditd_printk_skb: 72 callbacks suppressed
	[ +13.010118] kauditd_printk_skb: 12 callbacks suppressed
	[May 3 21:34] kauditd_printk_skb: 24 callbacks suppressed
	[ +12.027086] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.021938] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.461500] kauditd_printk_skb: 11 callbacks suppressed
	[ +13.726103] kauditd_printk_skb: 48 callbacks suppressed
	[May 3 21:35] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.018108] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [e5f7463a362b733d50da47fdf1a17ee01670c963c8ccbc76596f16fb32f3fb53] <==
	{"level":"warn","ts":"2024-05-03T21:33:22.759478Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.078809ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11161"}
	{"level":"info","ts":"2024-05-03T21:33:22.759502Z","caller":"traceutil/trace.go:171","msg":"trace[1502081960] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1017; }","duration":"239.126051ms","start":"2024-05-03T21:33:22.52037Z","end":"2024-05-03T21:33:22.759496Z","steps":["trace[1502081960] 'agreement among raft nodes before linearized reading'  (duration: 239.012061ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-03T21:33:22.762677Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.153786ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14071"}
	{"level":"info","ts":"2024-05-03T21:33:22.76271Z","caller":"traceutil/trace.go:171","msg":"trace[523592547] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1017; }","duration":"142.213486ms","start":"2024-05-03T21:33:22.620488Z","end":"2024-05-03T21:33:22.762702Z","steps":["trace[523592547] 'agreement among raft nodes before linearized reading'  (duration: 142.017524ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-03T21:33:26.098565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"477.135041ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14071"}
	{"level":"info","ts":"2024-05-03T21:33:26.098617Z","caller":"traceutil/trace.go:171","msg":"trace[1409187409] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1028; }","duration":"477.217937ms","start":"2024-05-03T21:33:25.621387Z","end":"2024-05-03T21:33:26.098605Z","steps":["trace[1409187409] 'range keys from in-memory index tree'  (duration: 477.055802ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-03T21:33:26.098653Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-03T21:33:25.621374Z","time spent":"477.272396ms","remote":"127.0.0.1:36526","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14095,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-05-03T21:33:26.098907Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"434.275457ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85842"}
	{"level":"info","ts":"2024-05-03T21:33:26.098933Z","caller":"traceutil/trace.go:171","msg":"trace[414410115] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1028; }","duration":"434.330825ms","start":"2024-05-03T21:33:25.664595Z","end":"2024-05-03T21:33:26.098926Z","steps":["trace[414410115] 'range keys from in-memory index tree'  (duration: 434.075469ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-03T21:33:26.098987Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-03T21:33:25.664581Z","time spent":"434.399671ms","remote":"127.0.0.1:36526","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":85866,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2024-05-03T21:33:31.974683Z","caller":"traceutil/trace.go:171","msg":"trace[1034800866] transaction","detail":"{read_only:false; response_revision:1085; number_of_response:1; }","duration":"121.662674ms","start":"2024-05-03T21:33:31.852996Z","end":"2024-05-03T21:33:31.974659Z","steps":["trace[1034800866] 'process raft request'  (duration: 121.506815ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-03T21:33:49.47632Z","caller":"traceutil/trace.go:171","msg":"trace[302981552] transaction","detail":"{read_only:false; response_revision:1163; number_of_response:1; }","duration":"347.7554ms","start":"2024-05-03T21:33:49.128531Z","end":"2024-05-03T21:33:49.476286Z","steps":["trace[302981552] 'process raft request'  (duration: 347.491195ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-03T21:33:49.487366Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-03T21:33:49.128512Z","time spent":"348.291464ms","remote":"127.0.0.1:36526","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":9592,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/gadget/gadget-6gt5q\" mod_revision:1110 > success:<request_put:<key:\"/registry/pods/gadget/gadget-6gt5q\" value_size:9550 >> failure:<request_range:<key:\"/registry/pods/gadget/gadget-6gt5q\" > >"}
	{"level":"info","ts":"2024-05-03T21:34:44.01747Z","caller":"traceutil/trace.go:171","msg":"trace[624240844] transaction","detail":"{read_only:false; response_revision:1268; number_of_response:1; }","duration":"138.73172ms","start":"2024-05-03T21:34:43.878687Z","end":"2024-05-03T21:34:44.017419Z","steps":["trace[624240844] 'process raft request'  (duration: 138.345049ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-03T21:34:53.768838Z","caller":"traceutil/trace.go:171","msg":"trace[1361915226] linearizableReadLoop","detail":"{readStateIndex:1389; appliedIndex:1388; }","duration":"147.473216ms","start":"2024-05-03T21:34:53.621331Z","end":"2024-05-03T21:34:53.768804Z","steps":["trace[1361915226] 'read index received'  (duration: 147.298729ms)","trace[1361915226] 'applied index is now lower than readState.Index'  (duration: 173.935µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-03T21:34:53.769329Z","caller":"traceutil/trace.go:171","msg":"trace[111291572] transaction","detail":"{read_only:false; response_revision:1333; number_of_response:1; }","duration":"164.700502ms","start":"2024-05-03T21:34:53.60461Z","end":"2024-05-03T21:34:53.769311Z","steps":["trace[111291572] 'process raft request'  (duration: 164.067518ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-03T21:34:53.77664Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.37571ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:86476"}
	{"level":"info","ts":"2024-05-03T21:34:53.776701Z","caller":"traceutil/trace.go:171","msg":"trace[1506586969] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1333; }","duration":"155.6011ms","start":"2024-05-03T21:34:53.621083Z","end":"2024-05-03T21:34:53.776684Z","steps":["trace[1506586969] 'agreement among raft nodes before linearized reading'  (duration: 148.750019ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-03T21:34:53.777514Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.651829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:86476"}
	{"level":"info","ts":"2024-05-03T21:34:53.777555Z","caller":"traceutil/trace.go:171","msg":"trace[555002527] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1333; }","duration":"109.751638ms","start":"2024-05-03T21:34:53.667794Z","end":"2024-05-03T21:34:53.777546Z","steps":["trace[555002527] 'agreement among raft nodes before linearized reading'  (duration: 103.767223ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-03T21:35:02.254287Z","caller":"traceutil/trace.go:171","msg":"trace[214883612] transaction","detail":"{read_only:false; response_revision:1401; number_of_response:1; }","duration":"113.3593ms","start":"2024-05-03T21:35:02.140859Z","end":"2024-05-03T21:35:02.254218Z","steps":["trace[214883612] 'process raft request'  (duration: 112.857086ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-03T21:35:06.102733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.319528ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12974"}
	{"level":"info","ts":"2024-05-03T21:35:06.102861Z","caller":"traceutil/trace.go:171","msg":"trace[486545352] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1424; }","duration":"184.474111ms","start":"2024-05-03T21:35:05.918368Z","end":"2024-05-03T21:35:06.102842Z","steps":["trace[486545352] 'range keys from in-memory index tree'  (duration: 184.170803ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-03T21:35:06.103082Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.766568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12974"}
	{"level":"info","ts":"2024-05-03T21:35:06.103101Z","caller":"traceutil/trace.go:171","msg":"trace[1206960876] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1424; }","duration":"114.864033ms","start":"2024-05-03T21:35:05.988231Z","end":"2024-05-03T21:35:06.103095Z","steps":["trace[1206960876] 'range keys from in-memory index tree'  (duration: 114.659028ms)"],"step_count":1}
	
	
	==> gcp-auth [41c9825e5fc6ea17375a6775473e76de916f6a11ce5097116126cf953e7bc88c] <==
	2024/05/03 21:34:44 GCP Auth Webhook started!
	2024/05/03 21:34:51 Ready to marshal response ...
	2024/05/03 21:34:51 Ready to write response ...
	2024/05/03 21:34:51 Ready to marshal response ...
	2024/05/03 21:34:51 Ready to write response ...
	2024/05/03 21:34:55 Ready to marshal response ...
	2024/05/03 21:34:55 Ready to write response ...
	2024/05/03 21:34:56 Ready to marshal response ...
	2024/05/03 21:34:56 Ready to write response ...
	
	
	==> kernel <==
	 21:35:13 up 3 min,  0 users,  load average: 1.35, 1.04, 0.47
	Linux addons-146858 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [56018d31e5460f3327a258b50077e8758121453828a53ac104da346b83e99d8b] <==
	I0503 21:32:17.090704       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0503 21:32:17.090776       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0503 21:32:17.260233       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0503 21:32:17.260339       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0503 21:32:18.100818       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.99.140.251"}
	I0503 21:32:18.182210       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.96.35.123"}
	I0503 21:32:18.263099       1 controller.go:615] quota admission added evaluator for: jobs.batch
	I0503 21:32:19.245511       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.108.69.14"}
	I0503 21:32:19.267895       1 controller.go:615] quota admission added evaluator for: statefulsets.apps
	I0503 21:32:19.530850       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.100.34.130"}
	I0503 21:32:21.053920       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.97.232.79"}
	E0503 21:32:45.717219       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.218.203:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.218.203:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.218.203:443: connect: connection refused
	W0503 21:32:45.717574       1 handler_proxy.go:93] no RequestInfo found in the context
	E0503 21:32:45.717704       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0503 21:32:45.719481       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.218.203:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.218.203:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.218.203:443: connect: connection refused
	E0503 21:32:45.723652       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.218.203:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.218.203:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.218.203:443: connect: connection refused
	E0503 21:32:45.776505       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.218.203:443/apis/metrics.k8s.io/v1beta1: bad status from https://10.104.218.203:443/apis/metrics.k8s.io/v1beta1: 403
	W0503 21:32:45.777198       1 handler_proxy.go:93] no RequestInfo found in the context
	E0503 21:32:45.777255       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0503 21:32:45.795974       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0503 21:35:05.747984       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0503 21:35:06.792790       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0503 21:35:10.054872       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [be5c6e8034bc06f90bff0111aaededf995be68f104e07f3b794cae818ab39c3c] <==
	I0503 21:33:35.288589       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0503 21:33:35.297171       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0503 21:33:35.299768       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0503 21:33:35.319029       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0503 21:33:35.327478       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0503 21:34:05.022829       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0503 21:34:05.026388       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0503 21:34:05.079500       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0503 21:34:05.079565       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0503 21:34:40.315884       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="219.261µs"
	I0503 21:34:45.364543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="20.532612ms"
	I0503 21:34:45.364795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="128.042µs"
	I0503 21:34:52.375574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="5.158µs"
	I0503 21:34:54.203447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="17.33436ms"
	I0503 21:34:54.203831       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="46.702µs"
	E0503 21:35:06.794863       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0503 21:35:08.050545       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0503 21:35:08.050596       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0503 21:35:08.602970       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0503 21:35:08.603040       1 shared_informer.go:320] Caches are synced for resource quota
	I0503 21:35:09.148395       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0503 21:35:09.148451       1 shared_informer.go:320] Caches are synced for garbage collector
	I0503 21:35:09.271797       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="11.957µs"
	W0503 21:35:10.093979       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0503 21:35:10.094047       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [4cedeb46667029b5523175b78f462ee2ef49e68bc455195cc8d74191a1ac9e1e] <==
	I0503 21:32:09.409193       1 server_linux.go:69] "Using iptables proxy"
	I0503 21:32:09.428525       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.58"]
	I0503 21:32:09.521965       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0503 21:32:09.522035       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0503 21:32:09.522052       1 server_linux.go:165] "Using iptables Proxier"
	I0503 21:32:09.525978       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0503 21:32:09.526267       1 server.go:872] "Version info" version="v1.30.0"
	I0503 21:32:09.526475       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0503 21:32:09.527396       1 config.go:192] "Starting service config controller"
	I0503 21:32:09.527439       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0503 21:32:09.527474       1 config.go:101] "Starting endpoint slice config controller"
	I0503 21:32:09.527478       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0503 21:32:09.527872       1 config.go:319] "Starting node config controller"
	I0503 21:32:09.527910       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0503 21:32:09.629257       1 shared_informer.go:320] Caches are synced for node config
	I0503 21:32:09.629310       1 shared_informer.go:320] Caches are synced for service config
	I0503 21:32:09.629338       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [cd7a346eb205cd7ddd9b975aaec3202ddbe0fe84a74075617aeee8e2b67fd983] <==
	W0503 21:31:52.366210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0503 21:31:52.366257       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0503 21:31:53.189894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0503 21:31:53.190231       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0503 21:31:53.303737       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0503 21:31:53.304037       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0503 21:31:53.330038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0503 21:31:53.330460       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0503 21:31:53.339307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0503 21:31:53.339574       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0503 21:31:53.374065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0503 21:31:53.374505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0503 21:31:53.379311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0503 21:31:53.379680       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0503 21:31:53.425647       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0503 21:31:53.425936       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0503 21:31:53.433706       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0503 21:31:53.433994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0503 21:31:53.446167       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0503 21:31:53.446772       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0503 21:31:53.654057       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0503 21:31:53.655895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0503 21:31:53.673996       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0503 21:31:53.674272       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0503 21:31:55.457456       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.070199    1233 reconciler_common.go:289] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9f519b90-209f-4356-b627-e6eb3cf8c941-gcp-creds\") on node \"addons-146858\" DevicePath \"\""
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.160016    1233 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1f61397-9820-46b6-8669-d79d5af9dd9d" path="/var/lib/kubelet/pods/b1f61397-9820-46b6-8669-d79d5af9dd9d/volumes"
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.328100    1233 topology_manager.go:215] "Topology Admit Handler" podUID="4bd8c17a-3902-4e6c-865f-9f1adb8864af" podNamespace="default" podName="task-pv-pod-restore"
	May 03 21:35:13 addons-146858 kubelet[1233]: E0503 21:35:13.328358    1233 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5655bb8b-caf2-45b0-be87-0b9c1b8e6ec7" containerName="gadget"
	May 03 21:35:13 addons-146858 kubelet[1233]: E0503 21:35:13.328372    1233 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7405a25-3cc5-4e99-a9b2-e79b705f75a9" containerName="registry-proxy"
	May 03 21:35:13 addons-146858 kubelet[1233]: E0503 21:35:13.328378    1233 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f3733bf0-8732-4eac-a504-9c72e2d4e0a7" containerName="registry-test"
	May 03 21:35:13 addons-146858 kubelet[1233]: E0503 21:35:13.328386    1233 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f519b90-209f-4356-b627-e6eb3cf8c941" containerName="busybox"
	May 03 21:35:13 addons-146858 kubelet[1233]: E0503 21:35:13.328392    1233 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="64997a94-617f-469b-9123-d13774652b03" containerName="registry"
	May 03 21:35:13 addons-146858 kubelet[1233]: E0503 21:35:13.328398    1233 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5655bb8b-caf2-45b0-be87-0b9c1b8e6ec7" containerName="gadget"
	May 03 21:35:13 addons-146858 kubelet[1233]: E0503 21:35:13.328404    1233 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1f61397-9820-46b6-8669-d79d5af9dd9d" containerName="task-pv-container"
	May 03 21:35:13 addons-146858 kubelet[1233]: E0503 21:35:13.328409    1233 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5655bb8b-caf2-45b0-be87-0b9c1b8e6ec7" containerName="gadget"
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.328446    1233 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1f61397-9820-46b6-8669-d79d5af9dd9d" containerName="task-pv-container"
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.328454    1233 memory_manager.go:354] "RemoveStaleState removing state" podUID="64997a94-617f-469b-9123-d13774652b03" containerName="registry"
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.328461    1233 memory_manager.go:354] "RemoveStaleState removing state" podUID="5655bb8b-caf2-45b0-be87-0b9c1b8e6ec7" containerName="gadget"
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.328466    1233 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7405a25-3cc5-4e99-a9b2-e79b705f75a9" containerName="registry-proxy"
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.328471    1233 memory_manager.go:354] "RemoveStaleState removing state" podUID="5655bb8b-caf2-45b0-be87-0b9c1b8e6ec7" containerName="gadget"
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.328476    1233 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3733bf0-8732-4eac-a504-9c72e2d4e0a7" containerName="registry-test"
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.328481    1233 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f519b90-209f-4356-b627-e6eb3cf8c941" containerName="busybox"
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.328485    1233 memory_manager.go:354] "RemoveStaleState removing state" podUID="5655bb8b-caf2-45b0-be87-0b9c1b8e6ec7" containerName="gadget"
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.450203    1233 topology_manager.go:215] "Topology Admit Handler" podUID="728c4732-c12e-400d-83a7-833110e8be43" podNamespace="local-path-storage" podName="helper-pod-delete-pvc-3b189e13-be8e-4d16-acdc-6b0df89705a6"
	May 03 21:35:13 addons-146858 kubelet[1233]: E0503 21:35:13.450306    1233 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5655bb8b-caf2-45b0-be87-0b9c1b8e6ec7" containerName="gadget"
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.450347    1233 memory_manager.go:354] "RemoveStaleState removing state" podUID="5655bb8b-caf2-45b0-be87-0b9c1b8e6ec7" containerName="gadget"
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.475375    1233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-44deeace-3cec-40c0-bc08-f94aec76c61d\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^05e9c857-0995-11ef-890c-fee66fd67707\") pod \"task-pv-pod-restore\" (UID: \"4bd8c17a-3902-4e6c-865f-9f1adb8864af\") " pod="default/task-pv-pod-restore"
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.475517    1233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcn8d\" (UniqueName: \"kubernetes.io/projected/4bd8c17a-3902-4e6c-865f-9f1adb8864af-kube-api-access-jcn8d\") pod \"task-pv-pod-restore\" (UID: \"4bd8c17a-3902-4e6c-865f-9f1adb8864af\") " pod="default/task-pv-pod-restore"
	May 03 21:35:13 addons-146858 kubelet[1233]: I0503 21:35:13.475577    1233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4bd8c17a-3902-4e6c-865f-9f1adb8864af-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"4bd8c17a-3902-4e6c-865f-9f1adb8864af\") " pod="default/task-pv-pod-restore"
	
	
	==> storage-provisioner [3ff996d58c018ee2c8f1fcb246aa7cfba2824b2a21a4402239379f26b7e7c50b] <==
	I0503 21:32:15.662969       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0503 21:32:15.707836       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0503 21:32:15.707923       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0503 21:32:15.736467       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0503 21:32:15.736707       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-146858_5f8d5499-3300-426c-9c1f-0d9039d6af88!
	I0503 21:32:15.738084       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a03c58e5-9f2d-412d-abf0-7c9c2900bf95", APIVersion:"v1", ResourceVersion:"591", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-146858_5f8d5499-3300-426c-9c1f-0d9039d6af88 became leader
	I0503 21:32:15.838252       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-146858_5f8d5499-3300-426c-9c1f-0d9039d6af88!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-146858 -n addons-146858
helpers_test.go:261: (dbg) Run:  kubectl --context addons-146858 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: task-pv-pod-restore ingress-nginx-admission-create-msmsl ingress-nginx-admission-patch-md55m helper-pod-delete-pvc-3b189e13-be8e-4d16-acdc-6b0df89705a6
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-146858 describe pod task-pv-pod-restore ingress-nginx-admission-create-msmsl ingress-nginx-admission-patch-md55m helper-pod-delete-pvc-3b189e13-be8e-4d16-acdc-6b0df89705a6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-146858 describe pod task-pv-pod-restore ingress-nginx-admission-create-msmsl ingress-nginx-admission-patch-md55m helper-pod-delete-pvc-3b189e13-be8e-4d16-acdc-6b0df89705a6: exit status 1 (97.856486ms)

-- stdout --
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-146858/192.168.39.58
	Start Time:       Fri, 03 May 2024 21:35:13 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jcn8d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-jcn8d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-146858
	  Normal  Pulling    0s    kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-msmsl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-md55m" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-3b189e13-be8e-4d16-acdc-6b0df89705a6" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-146858 describe pod task-pv-pod-restore ingress-nginx-admission-create-msmsl ingress-nginx-admission-patch-md55m helper-pod-delete-pvc-3b189e13-be8e-4d16-acdc-6b0df89705a6: exit status 1
--- FAIL: TestAddons/parallel/Headlamp (3.21s)


Test pass (288/325)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 69.21
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.0/json-events 21.68
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0/DeleteAll 0.14
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
22 TestOffline 89.67
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 214.44
29 TestAddons/parallel/Registry 23.89
30 TestAddons/parallel/Ingress 22.28
31 TestAddons/parallel/InspektorGadget 11.87
32 TestAddons/parallel/MetricsServer 7.08
33 TestAddons/parallel/HelmTiller 14.54
35 TestAddons/parallel/CSI 46.36
37 TestAddons/parallel/CloudSpanner 5.78
38 TestAddons/parallel/LocalPath 65.75
39 TestAddons/parallel/NvidiaDevicePlugin 6.54
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.11
44 TestAddons/StoppedEnableDisable 92.74
45 TestCertOptions 62.62
46 TestCertExpiration 275.2
48 TestForceSystemdFlag 56.31
49 TestForceSystemdEnv 98.61
51 TestKVMDriverInstallOrUpdate 9.57
55 TestErrorSpam/setup 45.17
56 TestErrorSpam/start 0.36
57 TestErrorSpam/status 0.78
58 TestErrorSpam/pause 1.61
59 TestErrorSpam/unpause 1.66
60 TestErrorSpam/stop 4.48
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 100.03
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 40.34
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.97
72 TestFunctional/serial/CacheCmd/cache/add_local 3.03
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.87
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.11
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
80 TestFunctional/serial/ExtraConfig 43.78
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 1.46
83 TestFunctional/serial/LogsFileCmd 1.52
84 TestFunctional/serial/InvalidService 3.87
86 TestFunctional/parallel/ConfigCmd 0.39
87 TestFunctional/parallel/DashboardCmd 14.81
88 TestFunctional/parallel/DryRun 0.28
89 TestFunctional/parallel/InternationalLanguage 0.14
90 TestFunctional/parallel/StatusCmd 0.81
94 TestFunctional/parallel/ServiceCmdConnect 21.51
95 TestFunctional/parallel/AddonsCmd 0.14
96 TestFunctional/parallel/PersistentVolumeClaim 50.94
98 TestFunctional/parallel/SSHCmd 0.44
99 TestFunctional/parallel/CpCmd 1.33
100 TestFunctional/parallel/MySQL 27.28
101 TestFunctional/parallel/FileSync 0.22
102 TestFunctional/parallel/CertSync 1.36
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
110 TestFunctional/parallel/License 0.79
111 TestFunctional/parallel/Version/short 0.09
112 TestFunctional/parallel/Version/components 0.58
113 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
116 TestFunctional/parallel/MountCmd/any-port 23.79
126 TestFunctional/parallel/ServiceCmd/DeployApp 8.25
127 TestFunctional/parallel/MountCmd/specific-port 2.01
128 TestFunctional/parallel/MountCmd/VerifyCleanup 1.29
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
130 TestFunctional/parallel/ProfileCmd/profile_list 0.39
131 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
132 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
133 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
134 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
135 TestFunctional/parallel/ImageCommands/ImageBuild 5.09
136 TestFunctional/parallel/ImageCommands/Setup 2.78
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
138 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.87
139 TestFunctional/parallel/ServiceCmd/List 0.51
140 TestFunctional/parallel/ServiceCmd/JSONOutput 1.35
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
142 TestFunctional/parallel/ServiceCmd/Format 0.35
143 TestFunctional/parallel/ServiceCmd/URL 0.34
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.15
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.83
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.15
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.57
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.11
150 TestFunctional/delete_addon-resizer_images 0.07
151 TestFunctional/delete_my-image_image 0.02
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 276.59
157 TestMultiControlPlane/serial/DeployApp 6.9
158 TestMultiControlPlane/serial/PingHostFromPods 1.38
159 TestMultiControlPlane/serial/AddWorkerNode 49.61
160 TestMultiControlPlane/serial/NodeLabels 0.06
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.56
162 TestMultiControlPlane/serial/CopyFile 13.42
163 TestMultiControlPlane/serial/StopSecondaryNode 93.11
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.42
165 TestMultiControlPlane/serial/RestartSecondaryNode 41.18
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.55
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 487.74
168 TestMultiControlPlane/serial/DeleteSecondaryNode 8.03
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.39
170 TestMultiControlPlane/serial/StopCluster 275.8
171 TestMultiControlPlane/serial/RestartCluster 159.12
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
173 TestMultiControlPlane/serial/AddSecondaryNode 76.18
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
178 TestJSONOutput/start/Command 57.24
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.71
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.68
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 2.32
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.22
206 TestMainNoArgs 0.05
207 TestMinikubeProfile 95.18
210 TestMountStart/serial/StartWithMountFirst 31.75
211 TestMountStart/serial/VerifyMountFirst 0.39
212 TestMountStart/serial/StartWithMountSecond 31.14
213 TestMountStart/serial/VerifyMountSecond 0.38
214 TestMountStart/serial/DeleteFirst 0.66
215 TestMountStart/serial/VerifyMountPostDelete 0.38
216 TestMountStart/serial/Stop 1.42
217 TestMountStart/serial/RestartStopped 24.17
218 TestMountStart/serial/VerifyMountPostStop 0.39
221 TestMultiNode/serial/FreshStart2Nodes 101.19
222 TestMultiNode/serial/DeployApp2Nodes 6.69
223 TestMultiNode/serial/PingHostFrom2Pods 0.87
224 TestMultiNode/serial/AddNode 43.53
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.22
227 TestMultiNode/serial/CopyFile 7.31
228 TestMultiNode/serial/StopNode 2.41
229 TestMultiNode/serial/StartAfterStop 24.77
230 TestMultiNode/serial/RestartKeepsNodes 296.63
231 TestMultiNode/serial/DeleteNode 2.26
232 TestMultiNode/serial/StopMultiNode 184.2
233 TestMultiNode/serial/RestartMultiNode 76.4
234 TestMultiNode/serial/ValidateNameConflict 48.95
239 TestPreload 451.78
241 TestScheduledStopUnix 116.81
245 TestRunningBinaryUpgrade 179.09
247 TestKubernetesUpgrade 195.36
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
251 TestNoKubernetes/serial/StartWithK8s 128.72
259 TestNetworkPlugins/group/false 3.17
263 TestNoKubernetes/serial/StartWithStopK8s 51.59
264 TestStoppedBinaryUpgrade/Setup 5.93
265 TestStoppedBinaryUpgrade/Upgrade 279.78
266 TestNoKubernetes/serial/Start 30.71
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
268 TestNoKubernetes/serial/ProfileList 28.92
269 TestNoKubernetes/serial/Stop 1.63
270 TestNoKubernetes/serial/StartNoArgs 44.47
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
280 TestPause/serial/Start 142.14
281 TestNetworkPlugins/group/auto/Start 150.24
282 TestStoppedBinaryUpgrade/MinikubeLogs 1.44
283 TestNetworkPlugins/group/kindnet/Start 68.39
284 TestPause/serial/SecondStartNoReconfiguration 56.58
285 TestNetworkPlugins/group/calico/Start 100.13
286 TestNetworkPlugins/group/auto/KubeletFlags 0.28
287 TestNetworkPlugins/group/auto/NetCatPod 10.29
288 TestNetworkPlugins/group/auto/DNS 0.18
289 TestNetworkPlugins/group/auto/Localhost 0.16
290 TestNetworkPlugins/group/auto/HairPin 0.18
291 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
292 TestNetworkPlugins/group/custom-flannel/Start 89.17
293 TestPause/serial/Pause 0.87
294 TestPause/serial/VerifyStatus 0.29
295 TestPause/serial/Unpause 0.82
296 TestPause/serial/PauseAgain 1.07
297 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
298 TestNetworkPlugins/group/kindnet/NetCatPod 9.32
299 TestPause/serial/DeletePaused 1.1
300 TestPause/serial/VerifyDeletedResources 0.52
301 TestNetworkPlugins/group/enable-default-cni/Start 87.08
302 TestNetworkPlugins/group/kindnet/DNS 0.2
303 TestNetworkPlugins/group/kindnet/Localhost 0.16
304 TestNetworkPlugins/group/kindnet/HairPin 0.16
305 TestNetworkPlugins/group/flannel/Start 117.84
306 TestNetworkPlugins/group/calico/ControllerPod 5.09
307 TestNetworkPlugins/group/calico/KubeletFlags 0.25
308 TestNetworkPlugins/group/calico/NetCatPod 10.4
309 TestNetworkPlugins/group/calico/DNS 0.19
310 TestNetworkPlugins/group/calico/Localhost 0.16
311 TestNetworkPlugins/group/calico/HairPin 0.17
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.7
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.96
314 TestNetworkPlugins/group/bridge/Start 108.25
315 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
316 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.37
317 TestNetworkPlugins/group/custom-flannel/DNS 0.19
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
320 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
321 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
322 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
324 TestStartStop/group/old-k8s-version/serial/FirstStart 182.41
326 TestStartStop/group/no-preload/serial/FirstStart 141.87
327 TestNetworkPlugins/group/flannel/ControllerPod 6.01
328 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
329 TestNetworkPlugins/group/flannel/NetCatPod 9.36
330 TestNetworkPlugins/group/flannel/DNS 0.16
331 TestNetworkPlugins/group/flannel/Localhost 0.12
332 TestNetworkPlugins/group/flannel/HairPin 0.14
334 TestStartStop/group/embed-certs/serial/FirstStart 77.06
335 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
336 TestNetworkPlugins/group/bridge/NetCatPod 9.27
337 TestNetworkPlugins/group/bridge/DNS 0.17
338 TestNetworkPlugins/group/bridge/Localhost 0.14
339 TestNetworkPlugins/group/bridge/HairPin 0.12
341 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 104.07
342 TestStartStop/group/embed-certs/serial/DeployApp 12.93
343 TestStartStop/group/no-preload/serial/DeployApp 10.35
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.24
345 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.23
346 TestStartStop/group/no-preload/serial/Stop 92.57
347 TestStartStop/group/embed-certs/serial/Stop 92.51
348 TestStartStop/group/old-k8s-version/serial/DeployApp 12.47
349 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.93
350 TestStartStop/group/old-k8s-version/serial/Stop 92.46
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.28
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.91
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
355 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
356 TestStartStop/group/no-preload/serial/SecondStart 321.74
357 TestStartStop/group/embed-certs/serial/SecondStart 343.38
358 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
359 TestStartStop/group/old-k8s-version/serial/SecondStart 195.2
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.27
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 317.52
362 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
364 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
365 TestStartStop/group/old-k8s-version/serial/Pause 2.81
367 TestStartStop/group/newest-cni/serial/FirstStart 61.29
368 TestStartStop/group/newest-cni/serial/DeployApp 0
369 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
370 TestStartStop/group/newest-cni/serial/Stop 2.56
371 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
372 TestStartStop/group/newest-cni/serial/SecondStart 36.91
373 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
375 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
376 TestStartStop/group/no-preload/serial/Pause 2.95
377 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 18.01
378 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
381 TestStartStop/group/newest-cni/serial/Pause 3.15
382 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
383 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
384 TestStartStop/group/embed-certs/serial/Pause 2.93
385 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.01
386 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
387 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
388 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.69
TestDownloadOnly/v1.20.0/json-events (69.21s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-324176 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-324176 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (1m9.205873135s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (69.21s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-324176
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-324176: exit status 85 (68.802361ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-324176 | jenkins | v1.33.0 | 03 May 24 21:29 UTC |          |
	|         | -p download-only-324176        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/03 21:29:38
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0503 21:29:38.770077   13390 out.go:291] Setting OutFile to fd 1 ...
	I0503 21:29:38.770342   13390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 21:29:38.770353   13390 out.go:304] Setting ErrFile to fd 2...
	I0503 21:29:38.770360   13390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 21:29:38.770568   13390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
	W0503 21:29:38.770705   13390 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18793-6010/.minikube/config/config.json: open /home/jenkins/minikube-integration/18793-6010/.minikube/config/config.json: no such file or directory
	I0503 21:29:38.771287   13390 out.go:298] Setting JSON to true
	I0503 21:29:38.772143   13390 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":720,"bootTime":1714771059,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0503 21:29:38.772211   13390 start.go:139] virtualization: kvm guest
	I0503 21:29:38.775144   13390 out.go:97] [download-only-324176] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0503 21:29:38.776888   13390 out.go:169] MINIKUBE_LOCATION=18793
	W0503 21:29:38.775255   13390 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18793-6010/.minikube/cache/preloaded-tarball: no such file or directory
	I0503 21:29:38.775302   13390 notify.go:220] Checking for updates...
	I0503 21:29:38.779917   13390 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 21:29:38.781622   13390 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18793-6010/kubeconfig
	I0503 21:29:38.783215   13390 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18793-6010/.minikube
	I0503 21:29:38.784800   13390 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0503 21:29:38.787598   13390 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0503 21:29:38.787863   13390 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 21:29:38.889860   13390 out.go:97] Using the kvm2 driver based on user configuration
	I0503 21:29:38.889891   13390 start.go:297] selected driver: kvm2
	I0503 21:29:38.889897   13390 start.go:901] validating driver "kvm2" against <nil>
	I0503 21:29:38.890231   13390 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 21:29:38.890348   13390 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18793-6010/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0503 21:29:38.904912   13390 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0503 21:29:38.904985   13390 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 21:29:38.905461   13390 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0503 21:29:38.905610   13390 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0503 21:29:38.905665   13390 cni.go:84] Creating CNI manager for ""
	I0503 21:29:38.905681   13390 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0503 21:29:38.905688   13390 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 21:29:38.905742   13390 start.go:340] cluster config:
	{Name:download-only-324176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-324176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 21:29:38.905904   13390 iso.go:125] acquiring lock: {Name:mkac3cf29445902eddb693be62f8a45d3ca86578 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 21:29:38.907991   13390 out.go:97] Downloading VM boot image ...
	I0503 21:29:38.908024   13390 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18793-6010/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0503 21:29:51.854968   13390 out.go:97] Starting "download-only-324176" primary control-plane node in "download-only-324176" cluster
	I0503 21:29:51.855000   13390 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0503 21:29:52.006327   13390 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0503 21:29:52.006358   13390 cache.go:56] Caching tarball of preloaded images
	I0503 21:29:52.006527   13390 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0503 21:29:52.008641   13390 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0503 21:29:52.008668   13390 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0503 21:29:52.165579   13390 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/18793-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0503 21:30:13.647318   13390 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0503 21:30:13.647411   13390 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18793-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0503 21:30:14.548288   13390 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0503 21:30:14.548620   13390 profile.go:143] Saving config to /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/download-only-324176/config.json ...
	I0503 21:30:14.548650   13390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/download-only-324176/config.json: {Name:mk7fabd05acc29b04eaf8f1c53611c01c8ffbf7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 21:30:14.548797   13390 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0503 21:30:14.548958   13390 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18793-6010/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-324176 host does not exist
	  To start a cluster, run: "minikube start -p download-only-324176"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-324176
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
TestDownloadOnly/v1.30.0/json-events (21.68s)
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-360729 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-360729 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (21.680770711s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (21.68s)
TestDownloadOnly/v1.30.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)
TestDownloadOnly/v1.30.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-360729
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-360729: exit status 85 (70.569524ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-324176 | jenkins | v1.33.0 | 03 May 24 21:29 UTC |                     |
	|         | -p download-only-324176        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 03 May 24 21:30 UTC | 03 May 24 21:30 UTC |
	| delete  | -p download-only-324176        | download-only-324176 | jenkins | v1.33.0 | 03 May 24 21:30 UTC | 03 May 24 21:30 UTC |
	| start   | -o=json --download-only        | download-only-360729 | jenkins | v1.33.0 | 03 May 24 21:30 UTC |                     |
	|         | -p download-only-360729        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/03 21:30:48
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0503 21:30:48.308042   13769 out.go:291] Setting OutFile to fd 1 ...
	I0503 21:30:48.308153   13769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 21:30:48.308162   13769 out.go:304] Setting ErrFile to fd 2...
	I0503 21:30:48.308166   13769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 21:30:48.308370   13769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
	I0503 21:30:48.308888   13769 out.go:298] Setting JSON to true
	I0503 21:30:48.309715   13769 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":789,"bootTime":1714771059,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0503 21:30:48.309773   13769 start.go:139] virtualization: kvm guest
	I0503 21:30:48.311764   13769 out.go:97] [download-only-360729] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0503 21:30:48.313432   13769 out.go:169] MINIKUBE_LOCATION=18793
	I0503 21:30:48.311936   13769 notify.go:220] Checking for updates...
	I0503 21:30:48.316189   13769 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 21:30:48.317591   13769 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18793-6010/kubeconfig
	I0503 21:30:48.318970   13769 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18793-6010/.minikube
	I0503 21:30:48.320313   13769 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0503 21:30:48.322659   13769 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0503 21:30:48.322892   13769 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 21:30:48.355176   13769 out.go:97] Using the kvm2 driver based on user configuration
	I0503 21:30:48.355215   13769 start.go:297] selected driver: kvm2
	I0503 21:30:48.355225   13769 start.go:901] validating driver "kvm2" against <nil>
	I0503 21:30:48.355584   13769 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 21:30:48.355695   13769 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18793-6010/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0503 21:30:48.371425   13769 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0503 21:30:48.371476   13769 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 21:30:48.372140   13769 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0503 21:30:48.372327   13769 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0503 21:30:48.372402   13769 cni.go:84] Creating CNI manager for ""
	I0503 21:30:48.372421   13769 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0503 21:30:48.372436   13769 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 21:30:48.372514   13769 start.go:340] cluster config:
	{Name:download-only-360729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-360729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 21:30:48.372622   13769 iso.go:125] acquiring lock: {Name:mkac3cf29445902eddb693be62f8a45d3ca86578 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 21:30:48.374221   13769 out.go:97] Starting "download-only-360729" primary control-plane node in "download-only-360729" cluster
	I0503 21:30:48.374258   13769 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0503 21:30:48.529103   13769 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4
	I0503 21:30:48.529148   13769 cache.go:56] Caching tarball of preloaded images
	I0503 21:30:48.529319   13769 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0503 21:30:48.531362   13769 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0503 21:30:48.531401   13769 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4 ...
	I0503 21:30:48.685690   13769 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:3a7aac5052a5448f24921f55001543e6 -> /home/jenkins/minikube-integration/18793-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4
	I0503 21:31:08.295218   13769 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4 ...
	I0503 21:31:08.295311   13769 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18793-6010/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-360729 host does not exist
	  To start a cluster, run: "minikube start -p download-only-360729"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.07s)
TestDownloadOnly/v1.30.0/DeleteAll (0.14s)
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.14s)
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-360729
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)
TestBinaryMirror (0.57s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-827678 --alsologtostderr --binary-mirror http://127.0.0.1:42999 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-827678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-827678
--- PASS: TestBinaryMirror (0.57s)
TestOffline (89.67s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-985134 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-985134 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m28.635910483s)
helpers_test.go:175: Cleaning up "offline-containerd-985134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-985134
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-985134: (1.037547542s)
--- PASS: TestOffline (89.67s)
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-146858
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-146858: exit status 85 (58.94032ms)
-- stdout --
	* Profile "addons-146858" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-146858"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-146858
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-146858: exit status 85 (59.764761ms)
-- stdout --
	* Profile "addons-146858" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-146858"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
TestAddons/Setup (214.44s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-146858 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-146858 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m34.441099875s)
--- PASS: TestAddons/Setup (214.44s)
TestAddons/parallel/Registry (23.89s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 22.751086ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-ngbjf" [64997a94-617f-469b-9123-d13774652b03] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00524461s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lmkbf" [f7405a25-3cc5-4e99-a9b2-e79b705f75a9] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005651095s
addons_test.go:340: (dbg) Run:  kubectl --context addons-146858 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-146858 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-146858 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (11.991344856s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-146858 ip
2024/05/03 21:35:08 [DEBUG] GET http://192.168.39.58:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-146858 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (23.89s)
TestAddons/parallel/Ingress (22.28s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-146858 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-146858 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-146858 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d2298fe1-06ca-4ee7-9eb8-2d1626342cff] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d2298fe1-06ca-4ee7-9eb8-2d1626342cff] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.005094059s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-146858 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-146858 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-146858 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.58
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-146858 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-146858 addons disable ingress-dns --alsologtostderr -v=1: (1.084448543s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-146858 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-146858 addons disable ingress --alsologtostderr -v=1: (7.8325768s)
--- PASS: TestAddons/parallel/Ingress (22.28s)
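Each result in this report ends with a standard gotest `--- PASS: <name> (<duration>s)` line. To tabulate test names and durations from a saved copy of the report, a one-line `sed` filter is enough (a sketch: the path `/tmp/report.txt` is illustrative, and the sample lines are copied from this report in place of the full log):

```shell
# Pull "name duration" pairs out of gotest "--- PASS" lines.
# Sample input copied from this report; in practice, point it at a saved log.
cat <<'EOF' > /tmp/report.txt
--- PASS: TestAddons/parallel/Ingress (22.28s)
--- PASS: TestAddons/parallel/InspektorGadget (11.87s)
--- PASS: TestAddons/parallel/CSI (46.36s)
EOF
# \1 = test name, \2 = duration in seconds
sed -n 's/^--- PASS: \(.*\) (\(.*\)s)$/\1 \2/p' /tmp/report.txt
```

The same pattern with `--- FAIL:` finds the failing tests listed at the top of the report.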

TestAddons/parallel/InspektorGadget (11.87s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6gt5q" [5655bb8b-caf2-45b0-be87-0b9c1b8e6ec7] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004231159s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-146858
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-146858: (5.869482667s)
--- PASS: TestAddons/parallel/InspektorGadget (11.87s)

TestAddons/parallel/MetricsServer (7.08s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 34.494453ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-sntqz" [10c0ce9e-46a5-44f6-b11e-c88357ae3a30] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005113553s
addons_test.go:415: (dbg) Run:  kubectl --context addons-146858 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-146858 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.08s)

TestAddons/parallel/HelmTiller (14.54s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 2.280296ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-6tlnr" [6659d916-b68d-4378-84d0-76c5fd93ba89] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.013463799s
addons_test.go:473: (dbg) Run:  kubectl --context addons-146858 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-146858 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.65082788s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-146858 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.54s)

TestAddons/parallel/CSI (46.36s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 35.999547ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-146858 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-146858 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b1f61397-9820-46b6-8669-d79d5af9dd9d] Pending
helpers_test.go:344: "task-pv-pod" [b1f61397-9820-46b6-8669-d79d5af9dd9d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b1f61397-9820-46b6-8669-d79d5af9dd9d] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.007159043s
addons_test.go:584: (dbg) Run:  kubectl --context addons-146858 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-146858 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-146858 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-146858 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-146858 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-146858 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-146858 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4bd8c17a-3902-4e6c-865f-9f1adb8864af] Pending
helpers_test.go:344: "task-pv-pod-restore" [4bd8c17a-3902-4e6c-865f-9f1adb8864af] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4bd8c17a-3902-4e6c-865f-9f1adb8864af] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004632757s
addons_test.go:626: (dbg) Run:  kubectl --context addons-146858 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-146858 delete pod task-pv-pod-restore: (1.341049231s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-146858 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-146858 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-146858 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-146858 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.889000538s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-146858 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-linux-amd64 -p addons-146858 addons disable volumesnapshots --alsologtostderr -v=1: (1.062271638s)
--- PASS: TestAddons/parallel/CSI (46.36s)
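The run of identical helpers_test.go:394 lines above is the test helper polling the PVC's `.status.phase` until it reaches `Bound`. The shape of that loop, in shell, is roughly the following (a sketch: the real helper shells out to `kubectl --context addons-146858 get pvc hpvc -o jsonpath={.status.phase} -n default`; here that call is stubbed with a local function so the loop can run without a cluster):

```shell
# Stub for the kubectl jsonpath query; reports Pending twice, then Bound.
# (Illustrative stand-in -- the test helper runs the real kubectl command.)
get_pvc_phase() {
  if [ "$1" -ge 3 ]; then echo Bound; else echo Pending; fi
}

tries=0
phase=Pending
while [ "$phase" != "Bound" ]; do
  tries=$((tries + 1))
  phase=$(get_pvc_phase "$tries")
  # the real helper waits a couple of seconds between polls
  # and gives up after the 6m0s timeout stated in the log
done
echo "phase=$phase after $tries polls"
```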

TestAddons/parallel/CloudSpanner (5.78s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dc8d859f6-52lb5" [2901079c-a832-497b-b986-f254253948ec] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005998153s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-146858
--- PASS: TestAddons/parallel/CloudSpanner (5.78s)

TestAddons/parallel/LocalPath (65.75s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-146858 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-146858 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-146858 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9f519b90-209f-4356-b627-e6eb3cf8c941] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9f519b90-209f-4356-b627-e6eb3cf8c941] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9f519b90-209f-4356-b627-e6eb3cf8c941] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 13.003702228s
addons_test.go:891: (dbg) Run:  kubectl --context addons-146858 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-146858 ssh "cat /opt/local-path-provisioner/pvc-3b189e13-be8e-4d16-acdc-6b0df89705a6_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-146858 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-146858 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-146858 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-146858 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.855267356s)
--- PASS: TestAddons/parallel/LocalPath (65.75s)

TestAddons/parallel/NvidiaDevicePlugin (6.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mwfx8" [927e681f-9b2a-492e-9276-e9b8f9d5e724] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006326563s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-146858
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-xbw8h" [5f83e62c-f536-4125-96ba-43ce3e669288] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005323145s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-146858 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-146858 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (92.74s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-146858
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-146858: (1m32.44066622s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-146858
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-146858
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-146858
--- PASS: TestAddons/StoppedEnableDisable (92.74s)

TestCertOptions (62.62s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-747044 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-747044 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m1.307008958s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-747044 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-747044 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-747044 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-747044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-747044
--- PASS: TestCertOptions (62.62s)

TestCertExpiration (275.2s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-996621 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-996621 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (44.731046279s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-996621 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-996621 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (49.632094059s)
helpers_test.go:175: Cleaning up "cert-expiration-996621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-996621
--- PASS: TestCertExpiration (275.20s)

TestForceSystemdFlag (56.31s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-312767 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-312767 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (55.11849392s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-312767 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-312767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-312767
--- PASS: TestForceSystemdFlag (56.31s)

TestForceSystemdEnv (98.61s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-980573 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-980573 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m37.401381521s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-980573 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-980573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-980573
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-980573: (1.011045305s)
--- PASS: TestForceSystemdEnv (98.61s)

TestKVMDriverInstallOrUpdate (9.57s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
E0503 22:36:48.418414   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (9.57s)

TestErrorSpam/setup (45.17s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-612353 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-612353 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-612353 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-612353 --driver=kvm2  --container-runtime=containerd: (45.165929511s)
--- PASS: TestErrorSpam/setup (45.17s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.61s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 pause
--- PASS: TestErrorSpam/pause (1.61s)

TestErrorSpam/unpause (1.66s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 unpause
--- PASS: TestErrorSpam/unpause (1.66s)

TestErrorSpam/stop (4.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 stop: (1.492300776s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 stop: (1.511588365s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-612353 --log_dir /tmp/nospam-612353 stop: (1.477367462s)
--- PASS: TestErrorSpam/stop (4.48s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18793-6010/.minikube/files/etc/test/nested/copy/13378/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (100.03s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-515062 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0503 21:39:45.608073   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:39:45.613783   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:39:45.624001   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:39:45.644237   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:39:45.684495   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:39:45.764828   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:39:45.925259   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:39:46.245876   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:39:46.886801   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:39:48.167461   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:39:50.729345   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:39:55.849881   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:40:06.090728   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-515062 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m40.030887624s)
--- PASS: TestFunctional/serial/StartWithProxy (100.03s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.34s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-515062 --alsologtostderr -v=8
E0503 21:40:26.571633   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-515062 --alsologtostderr -v=8: (40.336466914s)
functional_test.go:659: soft start took 40.337086249s for "functional-515062" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.34s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-515062 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-515062 cache add registry.k8s.io/pause:3.1: (1.313963965s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-515062 cache add registry.k8s.io/pause:3.3: (1.392169956s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 cache add registry.k8s.io/pause:latest
E0503 21:41:07.532288   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-515062 cache add registry.k8s.io/pause:latest: (1.265761626s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.97s)

TestFunctional/serial/CacheCmd/cache/add_local (3.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-515062 /tmp/TestFunctionalserialCacheCmdcacheadd_local2009985637/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 cache add minikube-local-cache-test:functional-515062
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-515062 cache add minikube-local-cache-test:functional-515062: (2.669338271s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 cache delete minikube-local-cache-test:functional-515062
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-515062
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.03s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-515062 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.983547ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-515062 cache reload: (1.194249611s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 kubectl -- --context functional-515062 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-515062 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (43.78s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-515062 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-515062 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.784029324s)
functional_test.go:757: restart took 43.784135367s for "functional-515062" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.78s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-515062 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.46s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-515062 logs: (1.462543007s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

TestFunctional/serial/LogsFileCmd (1.52s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 logs --file /tmp/TestFunctionalserialLogsFileCmd349395582/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-515062 logs --file /tmp/TestFunctionalserialLogsFileCmd349395582/001/logs.txt: (1.517464332s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

TestFunctional/serial/InvalidService (3.87s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-515062 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-515062
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-515062: exit status 115 (283.590594ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.229:31510 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-515062 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.87s)

TestFunctional/parallel/ConfigCmd (0.39s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-515062 config get cpus: exit status 14 (63.752743ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-515062 config get cpus: exit status 14 (60.861233ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

TestFunctional/parallel/DashboardCmd (14.81s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-515062 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-515062 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21830: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.81s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-515062 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-515062 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (138.063293ms)

-- stdout --
	* [functional-515062] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18793
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18793-6010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18793-6010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0503 21:42:34.221942   21538 out.go:291] Setting OutFile to fd 1 ...
	I0503 21:42:34.222056   21538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 21:42:34.222065   21538 out.go:304] Setting ErrFile to fd 2...
	I0503 21:42:34.222069   21538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 21:42:34.222801   21538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
	I0503 21:42:34.223673   21538 out.go:298] Setting JSON to false
	I0503 21:42:34.224592   21538 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1495,"bootTime":1714771059,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0503 21:42:34.224650   21538 start.go:139] virtualization: kvm guest
	I0503 21:42:34.226760   21538 out.go:177] * [functional-515062] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0503 21:42:34.228609   21538 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 21:42:34.228537   21538 notify.go:220] Checking for updates...
	I0503 21:42:34.230048   21538 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 21:42:34.231588   21538 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18793-6010/kubeconfig
	I0503 21:42:34.232973   21538 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18793-6010/.minikube
	I0503 21:42:34.234392   21538 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0503 21:42:34.235814   21538 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 21:42:34.237441   21538 config.go:182] Loaded profile config "functional-515062": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0503 21:42:34.237951   21538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:42:34.237998   21538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:42:34.252655   21538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37529
	I0503 21:42:34.253040   21538 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:42:34.253548   21538 main.go:141] libmachine: Using API Version  1
	I0503 21:42:34.253567   21538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:42:34.253860   21538 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:42:34.254061   21538 main.go:141] libmachine: (functional-515062) Calling .DriverName
	I0503 21:42:34.254295   21538 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 21:42:34.254561   21538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:42:34.254592   21538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:42:34.269095   21538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35081
	I0503 21:42:34.269505   21538 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:42:34.270000   21538 main.go:141] libmachine: Using API Version  1
	I0503 21:42:34.270020   21538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:42:34.270311   21538 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:42:34.270486   21538 main.go:141] libmachine: (functional-515062) Calling .DriverName
	I0503 21:42:34.301374   21538 out.go:177] * Using the kvm2 driver based on existing profile
	I0503 21:42:34.302736   21538 start.go:297] selected driver: kvm2
	I0503 21:42:34.302754   21538 start.go:901] validating driver "kvm2" against &{Name:functional-515062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:functional-515062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 21:42:34.302865   21538 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 21:42:34.305209   21538 out.go:177] 
	W0503 21:42:34.306525   21538 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0503 21:42:34.307675   21538 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-515062 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-515062 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-515062 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (142.379523ms)

-- stdout --
	* [functional-515062] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18793
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18793-6010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18793-6010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0503 21:42:35.320754   21689 out.go:291] Setting OutFile to fd 1 ...
	I0503 21:42:35.320856   21689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 21:42:35.320863   21689 out.go:304] Setting ErrFile to fd 2...
	I0503 21:42:35.320869   21689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 21:42:35.321149   21689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
	I0503 21:42:35.321735   21689 out.go:298] Setting JSON to false
	I0503 21:42:35.322658   21689 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1496,"bootTime":1714771059,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0503 21:42:35.322722   21689 start.go:139] virtualization: kvm guest
	I0503 21:42:35.324923   21689 out.go:177] * [functional-515062] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	I0503 21:42:35.326369   21689 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 21:42:35.326355   21689 notify.go:220] Checking for updates...
	I0503 21:42:35.329114   21689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 21:42:35.330348   21689 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18793-6010/kubeconfig
	I0503 21:42:35.331532   21689 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18793-6010/.minikube
	I0503 21:42:35.332628   21689 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0503 21:42:35.333877   21689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 21:42:35.335514   21689 config.go:182] Loaded profile config "functional-515062": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0503 21:42:35.335930   21689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:42:35.336016   21689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:42:35.351238   21689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45241
	I0503 21:42:35.351639   21689 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:42:35.352138   21689 main.go:141] libmachine: Using API Version  1
	I0503 21:42:35.352153   21689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:42:35.352435   21689 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:42:35.352620   21689 main.go:141] libmachine: (functional-515062) Calling .DriverName
	I0503 21:42:35.352846   21689 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 21:42:35.353123   21689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:42:35.353154   21689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:42:35.367160   21689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36929
	I0503 21:42:35.367505   21689 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:42:35.368019   21689 main.go:141] libmachine: Using API Version  1
	I0503 21:42:35.368051   21689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:42:35.368434   21689 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:42:35.368628   21689 main.go:141] libmachine: (functional-515062) Calling .DriverName
	I0503 21:42:35.400825   21689 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0503 21:42:35.402422   21689 start.go:297] selected driver: kvm2
	I0503 21:42:35.402439   21689 start.go:901] validating driver "kvm2" against &{Name:functional-515062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:functional-515062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.229 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 21:42:35.402563   21689 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 21:42:35.404880   21689 out.go:177] 
	W0503 21:42:35.406108   21689 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0503 21:42:35.407407   21689 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.81s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 status -o json
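The `-f` flag above renders each status field through a Go template, producing one comma-separated `key:value` line (note the format string's own spelling `kublet`). A minimal sketch of checking such a line field by field — the sample values below are hypothetical, only the key names come from the test's format string:

```shell
# Sample line in the shape produced by the format string above
# (values are illustrative, not taken from this run).
status="host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured"

# Extract the value for a given key from a key:value,key:value line.
get_field() {
  printf '%s\n' "$1" | tr ',' '\n' | awk -F: -v k="$2" '$1 == k { print $2 }'
}
```

For example, `get_field "$status" apiserver` prints `Running`.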
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)

TestFunctional/parallel/ServiceCmdConnect (21.51s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-515062 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-515062 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-zb2pq" [f7e088fb-7ce7-4d72-a8eb-1b21a74f2f71] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-zb2pq" [f7e088fb-7ce7-4d72-a8eb-1b21a74f2f71] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 21.005328424s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.50.229:30556
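The endpoint printed above combines the node's IP with the NodePort allocated by `expose`. Splitting the URL back into its parts with plain parameter expansion (URL copied from the log):

```shell
# URL as reported by `minikube service ... --url` in the log above.
url="http://192.168.50.229:30556"

hostport="${url#http://}"   # strip the scheme
node_ip="${hostport%%:*}"   # the node's IP
node_port="${hostport##*:}" # the allocated NodePort
```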
functional_test.go:1671: http://192.168.50.229:30556: success! body:

Hostname: hello-node-connect-57b4589c47-zb2pq

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.229:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.229:30556
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (21.51s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (50.94s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ecf071b9-be49-44e5-bbdd-47182a409bd7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005101675s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-515062 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-515062 apply -f testdata/storage-provisioner/pvc.yaml
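The `pvc.yaml` the test applies is not shown in the log; a claim of roughly this shape would reproduce the flow — the name `myclaim` appears in the log, but the access mode and storage size here are assumed placeholders:

```shell
# Write a minimal PVC manifest; `myclaim` matches the name queried in the
# log above, the rest of the spec is an assumed sketch.
cat > pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
# Then, as in the test: kubectl --context functional-515062 apply -f pvc.yaml
```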
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-515062 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-515062 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-515062 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d67032d7-d45c-429a-9630-e602e3310f81] Pending
helpers_test.go:344: "sp-pod" [d67032d7-d45c-429a-9630-e602e3310f81] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d67032d7-d45c-429a-9630-e602e3310f81] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.006833774s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-515062 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-515062 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-515062 delete -f testdata/storage-provisioner/pod.yaml: (1.076380095s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-515062 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7f597cdf-48e8-47ed-b309-c501dc9c0b46] Pending
helpers_test.go:344: "sp-pod" [7f597cdf-48e8-47ed-b309-c501dc9c0b46] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7f597cdf-48e8-47ed-b309-c501dc9c0b46] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004209825s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-515062 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.94s)

TestFunctional/parallel/SSHCmd (0.44s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

TestFunctional/parallel/CpCmd (1.33s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh -n functional-515062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 cp functional-515062:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2088023045/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh -n functional-515062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh -n functional-515062 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.33s)

TestFunctional/parallel/MySQL (27.28s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-515062 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-8f9cx" [08d31454-2ac6-4780-b835-ff48210a0a5a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-8f9cx" [08d31454-2ac6-4780-b835-ff48210a0a5a] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.007073588s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-515062 exec mysql-64454c8b5c-8f9cx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-515062 exec mysql-64454c8b5c-8f9cx -- mysql -ppassword -e "show databases;": exit status 1 (156.379455ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-515062 exec mysql-64454c8b5c-8f9cx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-515062 exec mysql-64454c8b5c-8f9cx -- mysql -ppassword -e "show databases;": exit status 1 (221.391863ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-515062 exec mysql-64454c8b5c-8f9cx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-515062 exec mysql-64454c8b5c-8f9cx -- mysql -ppassword -e "show databases;": exit status 1 (247.600777ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
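The failed `show databases;` attempts above are expected while MySQL is still initializing; the test simply re-runs the command until it succeeds. A generic retry helper in that spirit (hypothetical, not part of the suite):

```shell
# Re-run a command until it succeeds, up to a fixed number of attempts.
retry() {
  attempts="$1"; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$attempts" ] && return 1
    sleep 1
  done
}

# In the test's terms, something like:
# retry 10 kubectl --context functional-515062 exec mysql-64454c8b5c-8f9cx -- \
#   mysql -ppassword -e "show databases;"
```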
functional_test.go:1803: (dbg) Run:  kubectl --context functional-515062 exec mysql-64454c8b5c-8f9cx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.28s)

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13378/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "sudo cat /etc/test/nested/copy/13378/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.36s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13378.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "sudo cat /etc/ssl/certs/13378.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13378.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "sudo cat /usr/share/ca-certificates/13378.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/133782.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "sudo cat /etc/ssl/certs/133782.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/133782.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "sudo cat /usr/share/ca-certificates/133782.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.36s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-515062 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
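The go-template above iterates the first node's label map and prints the keys space-separated. Checking for a given key in such a line can be done with a whole-word match — the sample labels below are hypothetical, in the shape minikube nodes typically carry:

```shell
# Space-separated label keys, as printed by the go-template in the log above
# (sample values are illustrative, not from this run).
labels="beta.kubernetes.io/arch beta.kubernetes.io/os kubernetes.io/hostname minikube.k8s.io/name"

# Succeeds when the key occurs as a whole word in the list.
has_label() {
  case " $1 " in *" $2 "*) return 0 ;; *) return 1 ;; esac
}
```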
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-515062 ssh "sudo systemctl is-active docker": exit status 1 (212.034361ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-515062 ssh "sudo systemctl is-active crio": exit status 1 (276.501103ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
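`systemctl is-active` exits 0 only when the unit is active, so the non-zero exits above together with the literal output `inactive` are what the test treats as proof that docker and crio are disabled while containerd is the active runtime. A sketch of that acceptance check (`check_inactive` is a hypothetical helper, not part of the suite):

```shell
# Accept a reported systemd unit state as "not running":
# `is-active` prints states such as active, inactive, failed;
# only "active" comes with exit status 0.
check_inactive() {
  state="$1"
  [ -n "$state" ] && [ "$state" != "active" ]
}
```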
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

TestFunctional/parallel/License (0.79s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.79s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.58s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/MountCmd/any-port (23.79s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-515062 /tmp/TestFunctionalparallelMountCmdany-port2003851910/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714772525522547147" to /tmp/TestFunctionalparallelMountCmdany-port2003851910/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714772525522547147" to /tmp/TestFunctionalparallelMountCmdany-port2003851910/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714772525522547147" to /tmp/TestFunctionalparallelMountCmdany-port2003851910/001/test-1714772525522547147
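Before mounting, the test stages three marker files named after a nanosecond timestamp in a host temp dir, then verifies them from inside the VM over 9p. The host-side staging alone can be sketched as follows (the mount step itself needs a running cluster and is only shown as a comment):

```shell
# Create the same three marker files the test writes before mounting.
dir=$(mktemp -d)
stamp="test-$(date +%s%N)"   # nanosecond timestamp, as in the log above
for f in created-by-test created-by-test-removed-by-pod "$stamp"; do
  printf '%s' "$stamp" > "$dir/$f"
done
# Then: out/minikube-linux-amd64 mount -p functional-515062 "$dir":/mount-9p
```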
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-515062 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (268.39399ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May  3 21:42 created-by-test
-rw-r--r-- 1 docker docker 24 May  3 21:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May  3 21:42 test-1714772525522547147
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh cat /mount-9p/test-1714772525522547147
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-515062 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0237bc09-e7f1-4fe6-8c5a-48a4fd1fafcf] Pending
helpers_test.go:344: "busybox-mount" [0237bc09-e7f1-4fe6-8c5a-48a4fd1fafcf] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0237bc09-e7f1-4fe6-8c5a-48a4fd1fafcf] Running
helpers_test.go:344: "busybox-mount" [0237bc09-e7f1-4fe6-8c5a-48a4fd1fafcf] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0237bc09-e7f1-4fe6-8c5a-48a4fd1fafcf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 21.00556888s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-515062 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-515062 /tmp/TestFunctionalparallelMountCmdany-port2003851910/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (23.79s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-515062 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-515062 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-q9485" [985933c9-6013-4ad7-be68-3d67facbb73c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-q9485" [985933c9-6013-4ad7-be68-3d67facbb73c] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004181138s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.25s)
TestFunctional/parallel/MountCmd/specific-port (2.01s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-515062 /tmp/TestFunctionalparallelMountCmdspecific-port187206708/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "findmnt -T /mount-9p | grep 9p"
E0503 21:42:29.452767   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-515062 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (250.95541ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-515062 /tmp/TestFunctionalparallelMountCmdspecific-port187206708/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-515062 ssh "sudo umount -f /mount-9p": exit status 1 (216.184379ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-515062 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-515062 /tmp/TestFunctionalparallelMountCmdspecific-port187206708/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)
TestFunctional/parallel/MountCmd/VerifyCleanup (1.29s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-515062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4138905914/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-515062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4138905914/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-515062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4138905914/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-515062 ssh "findmnt -T" /mount1: exit status 1 (283.220675ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-515062 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-515062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4138905914/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-515062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4138905914/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-515062 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4138905914/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.29s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "326.709253ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "59.875333ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-515062 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-515062
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-515062
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-515062 image ls --format short --alsologtostderr:
I0503 21:42:56.188033   22458 out.go:291] Setting OutFile to fd 1 ...
I0503 21:42:56.188114   22458 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 21:42:56.188118   22458 out.go:304] Setting ErrFile to fd 2...
I0503 21:42:56.188122   22458 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 21:42:56.188286   22458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
I0503 21:42:56.188837   22458 config.go:182] Loaded profile config "functional-515062": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0503 21:42:56.188942   22458 config.go:182] Loaded profile config "functional-515062": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0503 21:42:56.189305   22458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0503 21:42:56.189349   22458 main.go:141] libmachine: Launching plugin server for driver kvm2
I0503 21:42:56.203487   22458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
I0503 21:42:56.203907   22458 main.go:141] libmachine: () Calling .GetVersion
I0503 21:42:56.205057   22458 main.go:141] libmachine: Using API Version  1
I0503 21:42:56.205083   22458 main.go:141] libmachine: () Calling .SetConfigRaw
I0503 21:42:56.205470   22458 main.go:141] libmachine: () Calling .GetMachineName
I0503 21:42:56.206383   22458 main.go:141] libmachine: (functional-515062) Calling .GetState
I0503 21:42:56.208261   22458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0503 21:42:56.208302   22458 main.go:141] libmachine: Launching plugin server for driver kvm2
I0503 21:42:56.222449   22458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45001
I0503 21:42:56.222788   22458 main.go:141] libmachine: () Calling .GetVersion
I0503 21:42:56.223281   22458 main.go:141] libmachine: Using API Version  1
I0503 21:42:56.223303   22458 main.go:141] libmachine: () Calling .SetConfigRaw
I0503 21:42:56.223565   22458 main.go:141] libmachine: () Calling .GetMachineName
I0503 21:42:56.223772   22458 main.go:141] libmachine: (functional-515062) Calling .DriverName
I0503 21:42:56.223916   22458 ssh_runner.go:195] Run: systemctl --version
I0503 21:42:56.223932   22458 main.go:141] libmachine: (functional-515062) Calling .GetSSHHostname
I0503 21:42:56.225951   22458 main.go:141] libmachine: (functional-515062) DBG | domain functional-515062 has defined MAC address 52:54:00:72:0a:b9 in network mk-functional-515062
I0503 21:42:56.226239   22458 main.go:141] libmachine: (functional-515062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0a:b9", ip: ""} in network mk-functional-515062: {Iface:virbr1 ExpiryTime:2024-05-03 22:39:00 +0000 UTC Type:0 Mac:52:54:00:72:0a:b9 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:functional-515062 Clientid:01:52:54:00:72:0a:b9}
I0503 21:42:56.226268   22458 main.go:141] libmachine: (functional-515062) DBG | domain functional-515062 has defined IP address 192.168.50.229 and MAC address 52:54:00:72:0a:b9 in network mk-functional-515062
I0503 21:42:56.226338   22458 main.go:141] libmachine: (functional-515062) Calling .GetSSHPort
I0503 21:42:56.226492   22458 main.go:141] libmachine: (functional-515062) Calling .GetSSHKeyPath
I0503 21:42:56.226653   22458 main.go:141] libmachine: (functional-515062) Calling .GetSSHUsername
I0503 21:42:56.226779   22458 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/functional-515062/id_rsa Username:docker}
I0503 21:42:56.313749   22458 ssh_runner.go:195] Run: sudo crictl images --output json
I0503 21:42:56.384705   22458 main.go:141] libmachine: Making call to close driver server
I0503 21:42:56.384719   22458 main.go:141] libmachine: (functional-515062) Calling .Close
I0503 21:42:56.385076   22458 main.go:141] libmachine: Successfully made call to close driver server
I0503 21:42:56.385095   22458 main.go:141] libmachine: Making call to close connection to plugin binary
I0503 21:42:56.385104   22458 main.go:141] libmachine: Making call to close driver server
I0503 21:42:56.385123   22458 main.go:141] libmachine: (functional-515062) Calling .Close
I0503 21:42:56.385330   22458 main.go:141] libmachine: Successfully made call to close driver server
I0503 21:42:56.385343   22458 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-515062 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-515062  | sha256:807db6 | 992B   |
| gcr.io/google-containers/addon-resizer      | functional-515062  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| registry.k8s.io/kube-apiserver              | v1.30.0            | sha256:c42f13 | 32.7MB |
| registry.k8s.io/kube-proxy                  | v1.30.0            | sha256:a0bf55 | 29MB   |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4950bb | 27.8MB |
| docker.io/library/nginx                     | latest             | sha256:7383c2 | 71MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-controller-manager     | v1.30.0            | sha256:c7aad4 | 31MB   |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-scheduler              | v1.30.0            | sha256:259c82 | 19.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-515062 image ls --format table --alsologtostderr:
I0503 21:42:56.456521   22548 out.go:291] Setting OutFile to fd 1 ...
I0503 21:42:56.456774   22548 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 21:42:56.456789   22548 out.go:304] Setting ErrFile to fd 2...
I0503 21:42:56.456794   22548 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 21:42:56.457664   22548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
I0503 21:42:56.458460   22548 config.go:182] Loaded profile config "functional-515062": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0503 21:42:56.458552   22548 config.go:182] Loaded profile config "functional-515062": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0503 21:42:56.458990   22548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0503 21:42:56.459026   22548 main.go:141] libmachine: Launching plugin server for driver kvm2
I0503 21:42:56.475371   22548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
I0503 21:42:56.475778   22548 main.go:141] libmachine: () Calling .GetVersion
I0503 21:42:56.476227   22548 main.go:141] libmachine: Using API Version  1
I0503 21:42:56.476244   22548 main.go:141] libmachine: () Calling .SetConfigRaw
I0503 21:42:56.476514   22548 main.go:141] libmachine: () Calling .GetMachineName
I0503 21:42:56.476633   22548 main.go:141] libmachine: (functional-515062) Calling .GetState
I0503 21:42:56.478351   22548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0503 21:42:56.478399   22548 main.go:141] libmachine: Launching plugin server for driver kvm2
I0503 21:42:56.492379   22548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36961
I0503 21:42:56.492726   22548 main.go:141] libmachine: () Calling .GetVersion
I0503 21:42:56.493130   22548 main.go:141] libmachine: Using API Version  1
I0503 21:42:56.493155   22548 main.go:141] libmachine: () Calling .SetConfigRaw
I0503 21:42:56.493530   22548 main.go:141] libmachine: () Calling .GetMachineName
I0503 21:42:56.493727   22548 main.go:141] libmachine: (functional-515062) Calling .DriverName
I0503 21:42:56.493936   22548 ssh_runner.go:195] Run: systemctl --version
I0503 21:42:56.493964   22548 main.go:141] libmachine: (functional-515062) Calling .GetSSHHostname
I0503 21:42:56.496227   22548 main.go:141] libmachine: (functional-515062) DBG | domain functional-515062 has defined MAC address 52:54:00:72:0a:b9 in network mk-functional-515062
I0503 21:42:56.496574   22548 main.go:141] libmachine: (functional-515062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0a:b9", ip: ""} in network mk-functional-515062: {Iface:virbr1 ExpiryTime:2024-05-03 22:39:00 +0000 UTC Type:0 Mac:52:54:00:72:0a:b9 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:functional-515062 Clientid:01:52:54:00:72:0a:b9}
I0503 21:42:56.496604   22548 main.go:141] libmachine: (functional-515062) DBG | domain functional-515062 has defined IP address 192.168.50.229 and MAC address 52:54:00:72:0a:b9 in network mk-functional-515062
I0503 21:42:56.496710   22548 main.go:141] libmachine: (functional-515062) Calling .GetSSHPort
I0503 21:42:56.496886   22548 main.go:141] libmachine: (functional-515062) Calling .GetSSHKeyPath
I0503 21:42:56.497050   22548 main.go:141] libmachine: (functional-515062) Calling .GetSSHUsername
I0503 21:42:56.497175   22548 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/functional-515062/id_rsa Username:docker}
I0503 21:42:56.580975   22548 ssh_runner.go:195] Run: sudo crictl images --output json
I0503 21:42:56.661330   22548 main.go:141] libmachine: Making call to close driver server
I0503 21:42:56.661351   22548 main.go:141] libmachine: (functional-515062) Calling .Close
I0503 21:42:56.661617   22548 main.go:141] libmachine: (functional-515062) DBG | Closing plugin on server side
I0503 21:42:56.661624   22548 main.go:141] libmachine: Successfully made call to close driver server
I0503 21:42:56.661667   22548 main.go:141] libmachine: Making call to close connection to plugin binary
I0503 21:42:56.661693   22548 main.go:141] libmachine: Making call to close driver server
I0503 21:42:56.661705   22548 main.go:141] libmachine: (functional-515062) Calling .Close
I0503 21:42:56.661916   22548 main.go:141] libmachine: Successfully made call to close driver server
I0503 21:42:56.661929   22548 main.go:141] libmachine: Making call to close connection to plugin binary
I0503 21:42:56.662004   22548 main.go:141] libmachine: (functional-515062) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-515062 image ls --format json --alsologtostderr:
[{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:807db670f13cd1851677d7a7298ce32642ce815cbec13205e59f76d61e297e79","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-515062"],"size":"992"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":["docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"70991807"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"32663599"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-515062"],"size":"10823156"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"31030110"},{"id":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"19208660"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"27755257"},{"id":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":["registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"29020473"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-515062 image ls --format json --alsologtostderr:
I0503 21:42:56.427268   22529 out.go:291] Setting OutFile to fd 1 ...
I0503 21:42:56.427708   22529 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 21:42:56.427749   22529 out.go:304] Setting ErrFile to fd 2...
I0503 21:42:56.427767   22529 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 21:42:56.427959   22529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
I0503 21:42:56.428558   22529 config.go:182] Loaded profile config "functional-515062": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0503 21:42:56.428684   22529 config.go:182] Loaded profile config "functional-515062": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0503 21:42:56.429174   22529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0503 21:42:56.429223   22529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0503 21:42:56.444716   22529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39627
I0503 21:42:56.445268   22529 main.go:141] libmachine: () Calling .GetVersion
I0503 21:42:56.446518   22529 main.go:141] libmachine: Using API Version  1
I0503 21:42:56.446549   22529 main.go:141] libmachine: () Calling .SetConfigRaw
I0503 21:42:56.446806   22529 main.go:141] libmachine: () Calling .GetMachineName
I0503 21:42:56.446977   22529 main.go:141] libmachine: (functional-515062) Calling .GetState
I0503 21:42:56.449141   22529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0503 21:42:56.449182   22529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0503 21:42:56.464313   22529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46377
I0503 21:42:56.464718   22529 main.go:141] libmachine: () Calling .GetVersion
I0503 21:42:56.465204   22529 main.go:141] libmachine: Using API Version  1
I0503 21:42:56.465223   22529 main.go:141] libmachine: () Calling .SetConfigRaw
I0503 21:42:56.465542   22529 main.go:141] libmachine: () Calling .GetMachineName
I0503 21:42:56.465718   22529 main.go:141] libmachine: (functional-515062) Calling .DriverName
I0503 21:42:56.465894   22529 ssh_runner.go:195] Run: systemctl --version
I0503 21:42:56.465912   22529 main.go:141] libmachine: (functional-515062) Calling .GetSSHHostname
I0503 21:42:56.469541   22529 main.go:141] libmachine: (functional-515062) DBG | domain functional-515062 has defined MAC address 52:54:00:72:0a:b9 in network mk-functional-515062
I0503 21:42:56.470007   22529 main.go:141] libmachine: (functional-515062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0a:b9", ip: ""} in network mk-functional-515062: {Iface:virbr1 ExpiryTime:2024-05-03 22:39:00 +0000 UTC Type:0 Mac:52:54:00:72:0a:b9 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:functional-515062 Clientid:01:52:54:00:72:0a:b9}
I0503 21:42:56.470034   22529 main.go:141] libmachine: (functional-515062) DBG | domain functional-515062 has defined IP address 192.168.50.229 and MAC address 52:54:00:72:0a:b9 in network mk-functional-515062
I0503 21:42:56.470253   22529 main.go:141] libmachine: (functional-515062) Calling .GetSSHPort
I0503 21:42:56.470409   22529 main.go:141] libmachine: (functional-515062) Calling .GetSSHKeyPath
I0503 21:42:56.470547   22529 main.go:141] libmachine: (functional-515062) Calling .GetSSHUsername
I0503 21:42:56.470793   22529 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/functional-515062/id_rsa Username:docker}
I0503 21:42:56.550661   22529 ssh_runner.go:195] Run: sudo crictl images --output json
I0503 21:42:56.602805   22529 main.go:141] libmachine: Making call to close driver server
I0503 21:42:56.602816   22529 main.go:141] libmachine: (functional-515062) Calling .Close
I0503 21:42:56.603111   22529 main.go:141] libmachine: Successfully made call to close driver server
I0503 21:42:56.603135   22529 main.go:141] libmachine: Making call to close connection to plugin binary
I0503 21:42:56.603140   22529 main.go:141] libmachine: (functional-515062) DBG | Closing plugin on server side
I0503 21:42:56.603151   22529 main.go:141] libmachine: Making call to close driver server
I0503 21:42:56.603162   22529 main.go:141] libmachine: (functional-515062) Calling .Close
I0503 21:42:56.603412   22529 main.go:141] libmachine: Successfully made call to close driver server
I0503 21:42:56.603431   22529 main.go:141] libmachine: Making call to close connection to plugin binary
I0503 21:42:56.603495   22529 main.go:141] libmachine: (functional-515062) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-515062 image ls --format yaml --alsologtostderr:
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "19208660"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests:
- docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee
repoTags:
- docker.io/library/nginx:latest
size: "70991807"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-515062
size: "10823156"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests:
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "29020473"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "27755257"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "32663599"
- id: sha256:807db670f13cd1851677d7a7298ce32642ce815cbec13205e59f76d61e297e79
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-515062
size: "992"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "31030110"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-515062 image ls --format yaml --alsologtostderr:
I0503 21:42:56.175259   22457 out.go:291] Setting OutFile to fd 1 ...
I0503 21:42:56.177471   22457 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 21:42:56.177496   22457 out.go:304] Setting ErrFile to fd 2...
I0503 21:42:56.177505   22457 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 21:42:56.177940   22457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
I0503 21:42:56.178909   22457 config.go:182] Loaded profile config "functional-515062": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0503 21:42:56.179073   22457 config.go:182] Loaded profile config "functional-515062": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0503 21:42:56.179699   22457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0503 21:42:56.179762   22457 main.go:141] libmachine: Launching plugin server for driver kvm2
I0503 21:42:56.194772   22457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
I0503 21:42:56.195200   22457 main.go:141] libmachine: () Calling .GetVersion
I0503 21:42:56.195763   22457 main.go:141] libmachine: Using API Version  1
I0503 21:42:56.195786   22457 main.go:141] libmachine: () Calling .SetConfigRaw
I0503 21:42:56.196196   22457 main.go:141] libmachine: () Calling .GetMachineName
I0503 21:42:56.196401   22457 main.go:141] libmachine: (functional-515062) Calling .GetState
I0503 21:42:56.198299   22457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0503 21:42:56.198333   22457 main.go:141] libmachine: Launching plugin server for driver kvm2
I0503 21:42:56.214293   22457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34325
I0503 21:42:56.214696   22457 main.go:141] libmachine: () Calling .GetVersion
I0503 21:42:56.215113   22457 main.go:141] libmachine: Using API Version  1
I0503 21:42:56.215130   22457 main.go:141] libmachine: () Calling .SetConfigRaw
I0503 21:42:56.215441   22457 main.go:141] libmachine: () Calling .GetMachineName
I0503 21:42:56.215641   22457 main.go:141] libmachine: (functional-515062) Calling .DriverName
I0503 21:42:56.215850   22457 ssh_runner.go:195] Run: systemctl --version
I0503 21:42:56.215870   22457 main.go:141] libmachine: (functional-515062) Calling .GetSSHHostname
I0503 21:42:56.219154   22457 main.go:141] libmachine: (functional-515062) DBG | domain functional-515062 has defined MAC address 52:54:00:72:0a:b9 in network mk-functional-515062
I0503 21:42:56.219548   22457 main.go:141] libmachine: (functional-515062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0a:b9", ip: ""} in network mk-functional-515062: {Iface:virbr1 ExpiryTime:2024-05-03 22:39:00 +0000 UTC Type:0 Mac:52:54:00:72:0a:b9 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:functional-515062 Clientid:01:52:54:00:72:0a:b9}
I0503 21:42:56.219570   22457 main.go:141] libmachine: (functional-515062) DBG | domain functional-515062 has defined IP address 192.168.50.229 and MAC address 52:54:00:72:0a:b9 in network mk-functional-515062
I0503 21:42:56.219804   22457 main.go:141] libmachine: (functional-515062) Calling .GetSSHPort
I0503 21:42:56.219981   22457 main.go:141] libmachine: (functional-515062) Calling .GetSSHKeyPath
I0503 21:42:56.220109   22457 main.go:141] libmachine: (functional-515062) Calling .GetSSHUsername
I0503 21:42:56.220219   22457 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/functional-515062/id_rsa Username:docker}
I0503 21:42:56.299055   22457 ssh_runner.go:195] Run: sudo crictl images --output json
I0503 21:42:56.354329   22457 main.go:141] libmachine: Making call to close driver server
I0503 21:42:56.354347   22457 main.go:141] libmachine: (functional-515062) Calling .Close
I0503 21:42:56.354662   22457 main.go:141] libmachine: Successfully made call to close driver server
I0503 21:42:56.354686   22457 main.go:141] libmachine: Making call to close connection to plugin binary
I0503 21:42:56.354695   22457 main.go:141] libmachine: Making call to close driver server
I0503 21:42:56.354712   22457 main.go:141] libmachine: (functional-515062) Calling .Close
I0503 21:42:56.354935   22457 main.go:141] libmachine: Successfully made call to close driver server
I0503 21:42:56.354951   22457 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-515062 ssh pgrep buildkitd: exit status 1 (244.553775ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image build -t localhost/my-image:functional-515062 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-515062 image build -t localhost/my-image:functional-515062 testdata/build --alsologtostderr: (4.626559821s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-515062 image build -t localhost/my-image:functional-515062 testdata/build --alsologtostderr:
I0503 21:42:56.428804   22530 out.go:291] Setting OutFile to fd 1 ...
I0503 21:42:56.429017   22530 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 21:42:56.429030   22530 out.go:304] Setting ErrFile to fd 2...
I0503 21:42:56.429036   22530 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 21:42:56.429326   22530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
I0503 21:42:56.430042   22530 config.go:182] Loaded profile config "functional-515062": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0503 21:42:56.430701   22530 config.go:182] Loaded profile config "functional-515062": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0503 21:42:56.431277   22530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0503 21:42:56.431344   22530 main.go:141] libmachine: Launching plugin server for driver kvm2
I0503 21:42:56.445603   22530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44509
I0503 21:42:56.446035   22530 main.go:141] libmachine: () Calling .GetVersion
I0503 21:42:56.446638   22530 main.go:141] libmachine: Using API Version  1
I0503 21:42:56.446686   22530 main.go:141] libmachine: () Calling .SetConfigRaw
I0503 21:42:56.447050   22530 main.go:141] libmachine: () Calling .GetMachineName
I0503 21:42:56.447231   22530 main.go:141] libmachine: (functional-515062) Calling .GetState
I0503 21:42:56.449141   22530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0503 21:42:56.449182   22530 main.go:141] libmachine: Launching plugin server for driver kvm2
I0503 21:42:56.463521   22530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45807
I0503 21:42:56.464106   22530 main.go:141] libmachine: () Calling .GetVersion
I0503 21:42:56.464626   22530 main.go:141] libmachine: Using API Version  1
I0503 21:42:56.464649   22530 main.go:141] libmachine: () Calling .SetConfigRaw
I0503 21:42:56.464975   22530 main.go:141] libmachine: () Calling .GetMachineName
I0503 21:42:56.465266   22530 main.go:141] libmachine: (functional-515062) Calling .DriverName
I0503 21:42:56.465463   22530 ssh_runner.go:195] Run: systemctl --version
I0503 21:42:56.465493   22530 main.go:141] libmachine: (functional-515062) Calling .GetSSHHostname
I0503 21:42:56.469007   22530 main.go:141] libmachine: (functional-515062) DBG | domain functional-515062 has defined MAC address 52:54:00:72:0a:b9 in network mk-functional-515062
I0503 21:42:56.469371   22530 main.go:141] libmachine: (functional-515062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0a:b9", ip: ""} in network mk-functional-515062: {Iface:virbr1 ExpiryTime:2024-05-03 22:39:00 +0000 UTC Type:0 Mac:52:54:00:72:0a:b9 Iaid: IPaddr:192.168.50.229 Prefix:24 Hostname:functional-515062 Clientid:01:52:54:00:72:0a:b9}
I0503 21:42:56.469404   22530 main.go:141] libmachine: (functional-515062) DBG | domain functional-515062 has defined IP address 192.168.50.229 and MAC address 52:54:00:72:0a:b9 in network mk-functional-515062
I0503 21:42:56.469534   22530 main.go:141] libmachine: (functional-515062) Calling .GetSSHPort
I0503 21:42:56.469681   22530 main.go:141] libmachine: (functional-515062) Calling .GetSSHKeyPath
I0503 21:42:56.469803   22530 main.go:141] libmachine: (functional-515062) Calling .GetSSHUsername
I0503 21:42:56.469941   22530 sshutil.go:53] new ssh client: &{IP:192.168.50.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/functional-515062/id_rsa Username:docker}
I0503 21:42:56.552443   22530 build_images.go:161] Building image from path: /tmp/build.3214438494.tar
I0503 21:42:56.552510   22530 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0503 21:42:56.565913   22530 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3214438494.tar
I0503 21:42:56.572985   22530 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3214438494.tar: stat -c "%s %y" /var/lib/minikube/build/build.3214438494.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3214438494.tar': No such file or directory
I0503 21:42:56.573022   22530 ssh_runner.go:362] scp /tmp/build.3214438494.tar --> /var/lib/minikube/build/build.3214438494.tar (3072 bytes)
I0503 21:42:56.643217   22530 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3214438494
I0503 21:42:56.661481   22530 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3214438494 -xf /var/lib/minikube/build/build.3214438494.tar
I0503 21:42:56.676739   22530 containerd.go:394] Building image: /var/lib/minikube/build/build.3214438494
I0503 21:42:56.676796   22530 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3214438494 --local dockerfile=/var/lib/minikube/build/build.3214438494 --output type=image,name=localhost/my-image:functional-515062
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.2s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:d01bd74bea8b5e391a17a48c4425a0143d2f1f458deb9ccfe11e1ece02df13e5
#8 exporting manifest sha256:d01bd74bea8b5e391a17a48c4425a0143d2f1f458deb9ccfe11e1ece02df13e5 0.0s done
#8 exporting config sha256:2a69ea770a4dbfa8247fc6a1fef8a9b0ce44f2aa69e83414d717a96ab6290c63 0.0s done
#8 naming to localhost/my-image:functional-515062 done
#8 DONE 0.2s
I0503 21:43:00.952611   22530 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3214438494 --local dockerfile=/var/lib/minikube/build/build.3214438494 --output type=image,name=localhost/my-image:functional-515062: (4.275782784s)
I0503 21:43:00.952689   22530 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3214438494
I0503 21:43:00.972102   22530 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3214438494.tar
I0503 21:43:00.984536   22530 build_images.go:217] Built localhost/my-image:functional-515062 from /tmp/build.3214438494.tar
I0503 21:43:00.984565   22530 build_images.go:133] succeeded building to: functional-515062
I0503 21:43:00.984571   22530 build_images.go:134] failed building to: 
I0503 21:43:00.984593   22530 main.go:141] libmachine: Making call to close driver server
I0503 21:43:00.984604   22530 main.go:141] libmachine: (functional-515062) Calling .Close
I0503 21:43:00.984869   22530 main.go:141] libmachine: Successfully made call to close driver server
I0503 21:43:00.984894   22530 main.go:141] libmachine: Making call to close connection to plugin binary
I0503 21:43:00.984908   22530 main.go:141] libmachine: Making call to close driver server
I0503 21:43:00.984915   22530 main.go:141] libmachine: (functional-515062) Calling .Close
I0503 21:43:00.985193   22530 main.go:141] libmachine: Successfully made call to close driver server
I0503 21:43:00.985194   22530 main.go:141] libmachine: (functional-515062) DBG | Closing plugin on server side
I0503 21:43:00.985213   22530 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.09s)
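For reference, the numbered BuildKit steps in the stderr above (steps #5 through #7) imply a three-instruction Dockerfile roughly equivalent to the following. This is a hypothetical reconstruction inferred from the build log, not the literal contents of minikube's testdata/build directory:

```dockerfile
# Hypothetical reconstruction from the build log above; the actual file
# lives in minikube's testdata/build directory.
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
```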

TestFunctional/parallel/ImageCommands/Setup (2.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.755515653s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-515062
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.78s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "349.745953ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "67.476206ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image load --daemon gcr.io/google-containers/addon-resizer:functional-515062 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-515062 image load --daemon gcr.io/google-containers/addon-resizer:functional-515062 --alsologtostderr: (4.640539904s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.87s)

TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-515062 service list -o json: (1.351291869s)
functional_test.go:1490: Took "1.351394956s" to run "out/minikube-linux-amd64 -p functional-515062 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.35s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.50.229:32511
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.50.229:32511
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image load --daemon gcr.io/google-containers/addon-resizer:functional-515062 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-515062 image load --daemon gcr.io/google-containers/addon-resizer:functional-515062 --alsologtostderr: (2.928124725s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.15s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.214280962s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-515062
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image load --daemon gcr.io/google-containers/addon-resizer:functional-515062 --alsologtostderr
2024/05/03 21:42:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-515062 image load --daemon gcr.io/google-containers/addon-resizer:functional-515062 --alsologtostderr: (4.35787039s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.83s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image save gcr.io/google-containers/addon-resizer:functional-515062 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-515062 image save gcr.io/google-containers/addon-resizer:functional-515062 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.152137541s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image rm gcr.io/google-containers/addon-resizer:functional-515062 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-515062 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.35085815s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.57s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-515062
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-515062 image save --daemon gcr.io/google-containers/addon-resizer:functional-515062 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-515062 image save --daemon gcr.io/google-containers/addon-resizer:functional-515062 --alsologtostderr: (1.081747282s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-515062
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.11s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-515062
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-515062
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-515062
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (276.59s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-250305 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0503 21:44:45.607271   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:45:13.293002   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:47:05.371700   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 21:47:05.377004   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 21:47:05.387274   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 21:47:05.407734   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 21:47:05.447995   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 21:47:05.528300   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 21:47:05.688798   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 21:47:06.009342   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 21:47:06.650291   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 21:47:07.931030   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 21:47:10.491781   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 21:47:15.612334   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 21:47:25.853555   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-250305 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (4m35.890641305s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (276.59s)

TestMultiControlPlane/serial/DeployApp (6.9s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-250305 -- rollout status deployment/busybox: (4.495743601s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- exec busybox-fc5497c4f-8k6q8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- exec busybox-fc5497c4f-crtg6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- exec busybox-fc5497c4f-f62tg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- exec busybox-fc5497c4f-8k6q8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- exec busybox-fc5497c4f-crtg6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- exec busybox-fc5497c4f-f62tg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- exec busybox-fc5497c4f-8k6q8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- exec busybox-fc5497c4f-crtg6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- exec busybox-fc5497c4f-f62tg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.90s)

TestMultiControlPlane/serial/PingHostFromPods (1.38s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- exec busybox-fc5497c4f-8k6q8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- exec busybox-fc5497c4f-8k6q8 -- sh -c "ping -c 1 192.168.39.1"
E0503 21:47:46.334294   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- exec busybox-fc5497c4f-crtg6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- exec busybox-fc5497c4f-crtg6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- exec busybox-fc5497c4f-f62tg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-250305 -- exec busybox-fc5497c4f-f62tg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.38s)

TestMultiControlPlane/serial/AddWorkerNode (49.61s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-250305 -v=7 --alsologtostderr
E0503 21:48:27.294451   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-250305 -v=7 --alsologtostderr: (48.761038635s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.61s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-250305 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

TestMultiControlPlane/serial/CopyFile (13.42s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp testdata/cp-test.txt ha-250305:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile567454867/001/cp-test_ha-250305.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305:/home/docker/cp-test.txt ha-250305-m02:/home/docker/cp-test_ha-250305_ha-250305-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m02 "sudo cat /home/docker/cp-test_ha-250305_ha-250305-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305:/home/docker/cp-test.txt ha-250305-m03:/home/docker/cp-test_ha-250305_ha-250305-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m03 "sudo cat /home/docker/cp-test_ha-250305_ha-250305-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305:/home/docker/cp-test.txt ha-250305-m04:/home/docker/cp-test_ha-250305_ha-250305-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m04 "sudo cat /home/docker/cp-test_ha-250305_ha-250305-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp testdata/cp-test.txt ha-250305-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile567454867/001/cp-test_ha-250305-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305-m02:/home/docker/cp-test.txt ha-250305:/home/docker/cp-test_ha-250305-m02_ha-250305.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305 "sudo cat /home/docker/cp-test_ha-250305-m02_ha-250305.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305-m02:/home/docker/cp-test.txt ha-250305-m03:/home/docker/cp-test_ha-250305-m02_ha-250305-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m03 "sudo cat /home/docker/cp-test_ha-250305-m02_ha-250305-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305-m02:/home/docker/cp-test.txt ha-250305-m04:/home/docker/cp-test_ha-250305-m02_ha-250305-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m04 "sudo cat /home/docker/cp-test_ha-250305-m02_ha-250305-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp testdata/cp-test.txt ha-250305-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile567454867/001/cp-test_ha-250305-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305-m03:/home/docker/cp-test.txt ha-250305:/home/docker/cp-test_ha-250305-m03_ha-250305.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305 "sudo cat /home/docker/cp-test_ha-250305-m03_ha-250305.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305-m03:/home/docker/cp-test.txt ha-250305-m02:/home/docker/cp-test_ha-250305-m03_ha-250305-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m02 "sudo cat /home/docker/cp-test_ha-250305-m03_ha-250305-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305-m03:/home/docker/cp-test.txt ha-250305-m04:/home/docker/cp-test_ha-250305-m03_ha-250305-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m04 "sudo cat /home/docker/cp-test_ha-250305-m03_ha-250305-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp testdata/cp-test.txt ha-250305-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile567454867/001/cp-test_ha-250305-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305-m04:/home/docker/cp-test.txt ha-250305:/home/docker/cp-test_ha-250305-m04_ha-250305.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305 "sudo cat /home/docker/cp-test_ha-250305-m04_ha-250305.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305-m04:/home/docker/cp-test.txt ha-250305-m02:/home/docker/cp-test_ha-250305-m04_ha-250305-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m02 "sudo cat /home/docker/cp-test_ha-250305-m04_ha-250305-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 cp ha-250305-m04:/home/docker/cp-test.txt ha-250305-m03:/home/docker/cp-test_ha-250305-m04_ha-250305-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 ssh -n ha-250305-m03 "sudo cat /home/docker/cp-test_ha-250305-m04_ha-250305-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.42s)

TestMultiControlPlane/serial/StopSecondaryNode (93.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 node stop m02 -v=7 --alsologtostderr
E0503 21:49:45.606884   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:49:49.215582   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-250305 node stop m02 -v=7 --alsologtostderr: (1m32.446396781s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-250305 status -v=7 --alsologtostderr: exit status 7 (660.281286ms)

-- stdout --
	ha-250305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-250305-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-250305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-250305-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0503 21:50:23.510691   27374 out.go:291] Setting OutFile to fd 1 ...
	I0503 21:50:23.510856   27374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 21:50:23.510867   27374 out.go:304] Setting ErrFile to fd 2...
	I0503 21:50:23.510873   27374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 21:50:23.511083   27374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
	I0503 21:50:23.511267   27374 out.go:298] Setting JSON to false
	I0503 21:50:23.511302   27374 mustload.go:65] Loading cluster: ha-250305
	I0503 21:50:23.511366   27374 notify.go:220] Checking for updates...
	I0503 21:50:23.511771   27374 config.go:182] Loaded profile config "ha-250305": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0503 21:50:23.511788   27374 status.go:255] checking status of ha-250305 ...
	I0503 21:50:23.512181   27374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:50:23.512248   27374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:50:23.531967   27374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33263
	I0503 21:50:23.532423   27374 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:50:23.533160   27374 main.go:141] libmachine: Using API Version  1
	I0503 21:50:23.533187   27374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:50:23.533646   27374 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:50:23.533863   27374 main.go:141] libmachine: (ha-250305) Calling .GetState
	I0503 21:50:23.535625   27374 status.go:330] ha-250305 host status = "Running" (err=<nil>)
	I0503 21:50:23.535643   27374 host.go:66] Checking if "ha-250305" exists ...
	I0503 21:50:23.535949   27374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:50:23.536057   27374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:50:23.550299   27374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0503 21:50:23.550700   27374 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:50:23.551186   27374 main.go:141] libmachine: Using API Version  1
	I0503 21:50:23.551211   27374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:50:23.551644   27374 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:50:23.551824   27374 main.go:141] libmachine: (ha-250305) Calling .GetIP
	I0503 21:50:23.555162   27374 main.go:141] libmachine: (ha-250305) DBG | domain ha-250305 has defined MAC address 52:54:00:8e:8f:48 in network mk-ha-250305
	I0503 21:50:23.555596   27374 main.go:141] libmachine: (ha-250305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:8f:48", ip: ""} in network mk-ha-250305: {Iface:virbr1 ExpiryTime:2024-05-03 22:43:18 +0000 UTC Type:0 Mac:52:54:00:8e:8f:48 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-250305 Clientid:01:52:54:00:8e:8f:48}
	I0503 21:50:23.555625   27374 main.go:141] libmachine: (ha-250305) DBG | domain ha-250305 has defined IP address 192.168.39.39 and MAC address 52:54:00:8e:8f:48 in network mk-ha-250305
	I0503 21:50:23.555807   27374 host.go:66] Checking if "ha-250305" exists ...
	I0503 21:50:23.556134   27374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:50:23.556174   27374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:50:23.570107   27374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40623
	I0503 21:50:23.570466   27374 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:50:23.570850   27374 main.go:141] libmachine: Using API Version  1
	I0503 21:50:23.570879   27374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:50:23.571220   27374 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:50:23.571398   27374 main.go:141] libmachine: (ha-250305) Calling .DriverName
	I0503 21:50:23.571555   27374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0503 21:50:23.571583   27374 main.go:141] libmachine: (ha-250305) Calling .GetSSHHostname
	I0503 21:50:23.574355   27374 main.go:141] libmachine: (ha-250305) DBG | domain ha-250305 has defined MAC address 52:54:00:8e:8f:48 in network mk-ha-250305
	I0503 21:50:23.574782   27374 main.go:141] libmachine: (ha-250305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:8f:48", ip: ""} in network mk-ha-250305: {Iface:virbr1 ExpiryTime:2024-05-03 22:43:18 +0000 UTC Type:0 Mac:52:54:00:8e:8f:48 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-250305 Clientid:01:52:54:00:8e:8f:48}
	I0503 21:50:23.574815   27374 main.go:141] libmachine: (ha-250305) DBG | domain ha-250305 has defined IP address 192.168.39.39 and MAC address 52:54:00:8e:8f:48 in network mk-ha-250305
	I0503 21:50:23.574996   27374 main.go:141] libmachine: (ha-250305) Calling .GetSSHPort
	I0503 21:50:23.575161   27374 main.go:141] libmachine: (ha-250305) Calling .GetSSHKeyPath
	I0503 21:50:23.575296   27374 main.go:141] libmachine: (ha-250305) Calling .GetSSHUsername
	I0503 21:50:23.575429   27374 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/ha-250305/id_rsa Username:docker}
	I0503 21:50:23.666368   27374 ssh_runner.go:195] Run: systemctl --version
	I0503 21:50:23.673831   27374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0503 21:50:23.693495   27374 kubeconfig.go:125] found "ha-250305" server: "https://192.168.39.254:8443"
	I0503 21:50:23.693520   27374 api_server.go:166] Checking apiserver status ...
	I0503 21:50:23.693550   27374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 21:50:23.709892   27374 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1142/cgroup
	W0503 21:50:23.721186   27374 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1142/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0503 21:50:23.721253   27374 ssh_runner.go:195] Run: ls
	I0503 21:50:23.727589   27374 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0503 21:50:23.734117   27374 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0503 21:50:23.734140   27374 status.go:422] ha-250305 apiserver status = Running (err=<nil>)
	I0503 21:50:23.734169   27374 status.go:257] ha-250305 status: &{Name:ha-250305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0503 21:50:23.734192   27374 status.go:255] checking status of ha-250305-m02 ...
	I0503 21:50:23.734572   27374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:50:23.734616   27374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:50:23.749044   27374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40135
	I0503 21:50:23.749465   27374 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:50:23.749911   27374 main.go:141] libmachine: Using API Version  1
	I0503 21:50:23.749933   27374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:50:23.750264   27374 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:50:23.750448   27374 main.go:141] libmachine: (ha-250305-m02) Calling .GetState
	I0503 21:50:23.751873   27374 status.go:330] ha-250305-m02 host status = "Stopped" (err=<nil>)
	I0503 21:50:23.751887   27374 status.go:343] host is not running, skipping remaining checks
	I0503 21:50:23.751894   27374 status.go:257] ha-250305-m02 status: &{Name:ha-250305-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0503 21:50:23.751911   27374 status.go:255] checking status of ha-250305-m03 ...
	I0503 21:50:23.752212   27374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:50:23.752253   27374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:50:23.766916   27374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0503 21:50:23.767275   27374 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:50:23.767783   27374 main.go:141] libmachine: Using API Version  1
	I0503 21:50:23.767810   27374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:50:23.768086   27374 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:50:23.768288   27374 main.go:141] libmachine: (ha-250305-m03) Calling .GetState
	I0503 21:50:23.769792   27374 status.go:330] ha-250305-m03 host status = "Running" (err=<nil>)
	I0503 21:50:23.769807   27374 host.go:66] Checking if "ha-250305-m03" exists ...
	I0503 21:50:23.770213   27374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:50:23.770255   27374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:50:23.784136   27374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39261
	I0503 21:50:23.785128   27374 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:50:23.786239   27374 main.go:141] libmachine: Using API Version  1
	I0503 21:50:23.786267   27374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:50:23.786597   27374 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:50:23.786794   27374 main.go:141] libmachine: (ha-250305-m03) Calling .GetIP
	I0503 21:50:23.789654   27374 main.go:141] libmachine: (ha-250305-m03) DBG | domain ha-250305-m03 has defined MAC address 52:54:00:ca:96:06 in network mk-ha-250305
	I0503 21:50:23.790100   27374 main.go:141] libmachine: (ha-250305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:96:06", ip: ""} in network mk-ha-250305: {Iface:virbr1 ExpiryTime:2024-05-03 22:46:43 +0000 UTC Type:0 Mac:52:54:00:ca:96:06 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-250305-m03 Clientid:01:52:54:00:ca:96:06}
	I0503 21:50:23.790137   27374 main.go:141] libmachine: (ha-250305-m03) DBG | domain ha-250305-m03 has defined IP address 192.168.39.22 and MAC address 52:54:00:ca:96:06 in network mk-ha-250305
	I0503 21:50:23.790181   27374 host.go:66] Checking if "ha-250305-m03" exists ...
	I0503 21:50:23.790530   27374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:50:23.790573   27374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:50:23.804601   27374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45671
	I0503 21:50:23.804968   27374 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:50:23.805362   27374 main.go:141] libmachine: Using API Version  1
	I0503 21:50:23.805394   27374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:50:23.805723   27374 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:50:23.805919   27374 main.go:141] libmachine: (ha-250305-m03) Calling .DriverName
	I0503 21:50:23.806120   27374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0503 21:50:23.806147   27374 main.go:141] libmachine: (ha-250305-m03) Calling .GetSSHHostname
	I0503 21:50:23.808596   27374 main.go:141] libmachine: (ha-250305-m03) DBG | domain ha-250305-m03 has defined MAC address 52:54:00:ca:96:06 in network mk-ha-250305
	I0503 21:50:23.808937   27374 main.go:141] libmachine: (ha-250305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:96:06", ip: ""} in network mk-ha-250305: {Iface:virbr1 ExpiryTime:2024-05-03 22:46:43 +0000 UTC Type:0 Mac:52:54:00:ca:96:06 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-250305-m03 Clientid:01:52:54:00:ca:96:06}
	I0503 21:50:23.808961   27374 main.go:141] libmachine: (ha-250305-m03) DBG | domain ha-250305-m03 has defined IP address 192.168.39.22 and MAC address 52:54:00:ca:96:06 in network mk-ha-250305
	I0503 21:50:23.809068   27374 main.go:141] libmachine: (ha-250305-m03) Calling .GetSSHPort
	I0503 21:50:23.809235   27374 main.go:141] libmachine: (ha-250305-m03) Calling .GetSSHKeyPath
	I0503 21:50:23.809381   27374 main.go:141] libmachine: (ha-250305-m03) Calling .GetSSHUsername
	I0503 21:50:23.809504   27374 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/ha-250305-m03/id_rsa Username:docker}
	I0503 21:50:23.892921   27374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0503 21:50:23.912680   27374 kubeconfig.go:125] found "ha-250305" server: "https://192.168.39.254:8443"
	I0503 21:50:23.912707   27374 api_server.go:166] Checking apiserver status ...
	I0503 21:50:23.912746   27374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 21:50:23.929131   27374 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup
	W0503 21:50:23.943917   27374 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0503 21:50:23.943959   27374 ssh_runner.go:195] Run: ls
	I0503 21:50:23.949185   27374 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0503 21:50:23.953580   27374 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0503 21:50:23.953599   27374 status.go:422] ha-250305-m03 apiserver status = Running (err=<nil>)
	I0503 21:50:23.953613   27374 status.go:257] ha-250305-m03 status: &{Name:ha-250305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0503 21:50:23.953626   27374 status.go:255] checking status of ha-250305-m04 ...
	I0503 21:50:23.953949   27374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:50:23.953989   27374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:50:23.968430   27374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38659
	I0503 21:50:23.968787   27374 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:50:23.969190   27374 main.go:141] libmachine: Using API Version  1
	I0503 21:50:23.969210   27374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:50:23.969480   27374 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:50:23.969667   27374 main.go:141] libmachine: (ha-250305-m04) Calling .GetState
	I0503 21:50:23.971208   27374 status.go:330] ha-250305-m04 host status = "Running" (err=<nil>)
	I0503 21:50:23.971224   27374 host.go:66] Checking if "ha-250305-m04" exists ...
	I0503 21:50:23.971547   27374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:50:23.971583   27374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:50:23.985698   27374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36969
	I0503 21:50:23.986026   27374 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:50:23.986497   27374 main.go:141] libmachine: Using API Version  1
	I0503 21:50:23.986514   27374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:50:23.986838   27374 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:50:23.987032   27374 main.go:141] libmachine: (ha-250305-m04) Calling .GetIP
	I0503 21:50:23.989670   27374 main.go:141] libmachine: (ha-250305-m04) DBG | domain ha-250305-m04 has defined MAC address 52:54:00:6b:4a:e7 in network mk-ha-250305
	I0503 21:50:23.990237   27374 main.go:141] libmachine: (ha-250305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4a:e7", ip: ""} in network mk-ha-250305: {Iface:virbr1 ExpiryTime:2024-05-03 22:48:04 +0000 UTC Type:0 Mac:52:54:00:6b:4a:e7 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-250305-m04 Clientid:01:52:54:00:6b:4a:e7}
	I0503 21:50:23.990269   27374 main.go:141] libmachine: (ha-250305-m04) DBG | domain ha-250305-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:6b:4a:e7 in network mk-ha-250305
	I0503 21:50:23.990406   27374 host.go:66] Checking if "ha-250305-m04" exists ...
	I0503 21:50:23.990662   27374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 21:50:23.990691   27374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 21:50:24.005298   27374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44513
	I0503 21:50:24.005656   27374 main.go:141] libmachine: () Calling .GetVersion
	I0503 21:50:24.006053   27374 main.go:141] libmachine: Using API Version  1
	I0503 21:50:24.006076   27374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 21:50:24.006357   27374 main.go:141] libmachine: () Calling .GetMachineName
	I0503 21:50:24.006564   27374 main.go:141] libmachine: (ha-250305-m04) Calling .DriverName
	I0503 21:50:24.006720   27374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0503 21:50:24.006736   27374 main.go:141] libmachine: (ha-250305-m04) Calling .GetSSHHostname
	I0503 21:50:24.009169   27374 main.go:141] libmachine: (ha-250305-m04) DBG | domain ha-250305-m04 has defined MAC address 52:54:00:6b:4a:e7 in network mk-ha-250305
	I0503 21:50:24.009566   27374 main.go:141] libmachine: (ha-250305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4a:e7", ip: ""} in network mk-ha-250305: {Iface:virbr1 ExpiryTime:2024-05-03 22:48:04 +0000 UTC Type:0 Mac:52:54:00:6b:4a:e7 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-250305-m04 Clientid:01:52:54:00:6b:4a:e7}
	I0503 21:50:24.009594   27374 main.go:141] libmachine: (ha-250305-m04) DBG | domain ha-250305-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:6b:4a:e7 in network mk-ha-250305
	I0503 21:50:24.009732   27374 main.go:141] libmachine: (ha-250305-m04) Calling .GetSSHPort
	I0503 21:50:24.009891   27374 main.go:141] libmachine: (ha-250305-m04) Calling .GetSSHKeyPath
	I0503 21:50:24.010042   27374 main.go:141] libmachine: (ha-250305-m04) Calling .GetSSHUsername
	I0503 21:50:24.010187   27374 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/ha-250305-m04/id_rsa Username:docker}
	I0503 21:50:24.096810   27374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0503 21:50:24.114984   27374 status.go:257] ha-250305-m04 status: &{Name:ha-250305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (93.11s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)

TestMultiControlPlane/serial/RestartSecondaryNode (41.18s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-250305 node start m02 -v=7 --alsologtostderr: (40.262173619s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (41.18s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.55s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (487.74s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-250305 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-250305 -v=7 --alsologtostderr
E0503 21:52:05.370854   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 21:52:33.056722   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 21:54:45.606357   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-250305 -v=7 --alsologtostderr: (4m37.940074395s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-250305 --wait=true -v=7 --alsologtostderr
E0503 21:56:08.653281   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 21:57:05.370836   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-250305 --wait=true -v=7 --alsologtostderr: (3m29.675562332s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-250305
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (487.74s)

TestMultiControlPlane/serial/DeleteSecondaryNode (8.03s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-250305 node delete m03 -v=7 --alsologtostderr: (7.257538869s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.03s)
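The readiness check at ha_test.go:519 feeds `kubectl get nodes` a go-template that prints the status of each node's `Ready` condition. A sketch of how that template evaluates, using hypothetical stand-in structs (kubectl walks untyped JSON, hence the lowercase field names in the command; Go's text/template needs exported fields here):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Hypothetical stand-ins for the fields the test's go-template walks
// (items -> status.conditions -> type/status); not kubectl's real types.
type condition struct{ Type, Status string }
type nodeStatus struct{ Conditions []condition }
type node struct{ Status nodeStatus }
type nodeList struct{ Items []node }

// Same shape as the template in ha_test.go:519, with exported field names.
const readyTmpl = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

// renderReady emits one line per node holding its Ready condition status.
func renderReady(list nodeList) (string, error) {
	t, err := template.New("ready").Parse(readyTmpl)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	if err := t.Execute(&b, list); err != nil {
		return "", err
	}
	return b.String(), nil
}

func main() {
	list := nodeList{Items: []node{
		{Status: nodeStatus{Conditions: []condition{{Type: "Ready", Status: "True"}}}},
		{Status: nodeStatus{Conditions: []condition{{Type: "Ready", Status: "True"}}}},
	}}
	out, _ := renderReady(list)
	fmt.Print(out) // one " True" line per node whose Ready condition is True
}
```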

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

TestMultiControlPlane/serial/StopCluster (275.8s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 stop -v=7 --alsologtostderr
E0503 21:59:45.606831   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 22:02:05.370938   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 22:03:28.417921   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-250305 stop -v=7 --alsologtostderr: (4m35.681203809s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-250305 status -v=7 --alsologtostderr: exit status 7 (122.365837ms)

-- stdout --
	ha-250305
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-250305-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-250305-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0503 22:03:58.172121   31343 out.go:291] Setting OutFile to fd 1 ...
	I0503 22:03:58.172418   31343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 22:03:58.172429   31343 out.go:304] Setting ErrFile to fd 2...
	I0503 22:03:58.172433   31343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 22:03:58.172607   31343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
	I0503 22:03:58.172817   31343 out.go:298] Setting JSON to false
	I0503 22:03:58.172845   31343 mustload.go:65] Loading cluster: ha-250305
	I0503 22:03:58.172991   31343 notify.go:220] Checking for updates...
	I0503 22:03:58.173383   31343 config.go:182] Loaded profile config "ha-250305": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0503 22:03:58.173406   31343 status.go:255] checking status of ha-250305 ...
	I0503 22:03:58.173908   31343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 22:03:58.173966   31343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 22:03:58.198054   31343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36639
	I0503 22:03:58.198516   31343 main.go:141] libmachine: () Calling .GetVersion
	I0503 22:03:58.199140   31343 main.go:141] libmachine: Using API Version  1
	I0503 22:03:58.199170   31343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 22:03:58.199514   31343 main.go:141] libmachine: () Calling .GetMachineName
	I0503 22:03:58.199725   31343 main.go:141] libmachine: (ha-250305) Calling .GetState
	I0503 22:03:58.201537   31343 status.go:330] ha-250305 host status = "Stopped" (err=<nil>)
	I0503 22:03:58.201551   31343 status.go:343] host is not running, skipping remaining checks
	I0503 22:03:58.201558   31343 status.go:257] ha-250305 status: &{Name:ha-250305 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0503 22:03:58.201586   31343 status.go:255] checking status of ha-250305-m02 ...
	I0503 22:03:58.201870   31343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 22:03:58.201904   31343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 22:03:58.216668   31343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35373
	I0503 22:03:58.217157   31343 main.go:141] libmachine: () Calling .GetVersion
	I0503 22:03:58.217625   31343 main.go:141] libmachine: Using API Version  1
	I0503 22:03:58.217647   31343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 22:03:58.217982   31343 main.go:141] libmachine: () Calling .GetMachineName
	I0503 22:03:58.218176   31343 main.go:141] libmachine: (ha-250305-m02) Calling .GetState
	I0503 22:03:58.219772   31343 status.go:330] ha-250305-m02 host status = "Stopped" (err=<nil>)
	I0503 22:03:58.219787   31343 status.go:343] host is not running, skipping remaining checks
	I0503 22:03:58.219795   31343 status.go:257] ha-250305-m02 status: &{Name:ha-250305-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0503 22:03:58.219815   31343 status.go:255] checking status of ha-250305-m04 ...
	I0503 22:03:58.220105   31343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 22:03:58.220147   31343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 22:03:58.234965   31343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42859
	I0503 22:03:58.235455   31343 main.go:141] libmachine: () Calling .GetVersion
	I0503 22:03:58.236020   31343 main.go:141] libmachine: Using API Version  1
	I0503 22:03:58.236048   31343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 22:03:58.236414   31343 main.go:141] libmachine: () Calling .GetMachineName
	I0503 22:03:58.236674   31343 main.go:141] libmachine: (ha-250305-m04) Calling .GetState
	I0503 22:03:58.238308   31343 status.go:330] ha-250305-m04 host status = "Stopped" (err=<nil>)
	I0503 22:03:58.238321   31343 status.go:343] host is not running, skipping remaining checks
	I0503 22:03:58.238328   31343 status.go:257] ha-250305-m04 status: &{Name:ha-250305-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (275.80s)
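The per-node status structs printed in the stderr log above (`&{Name:ha-250305 Host:Stopped Kubelet:Stopped APIServer:Stopped ...}`) are what the Degraded* checks roll up into a cluster-level verdict. A Go sketch with hypothetical types and a deliberately simplified roll-up rule, for illustration only (minikube's actual degraded-state logic is not reproduced in this log):

```go
package main

import "fmt"

// Status mirrors the fields visible in the log's printed struct;
// a hypothetical stand-in, not minikube's real type.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

// clusterState is a simplified roll-up: all control planes down => "Stopped",
// some down => "Degraded", otherwise "Running". Workers are ignored.
func clusterState(nodes []Status) string {
	running, total := 0, 0
	for _, n := range nodes {
		if n.Worker {
			continue // only control-plane nodes count here
		}
		total++
		if n.Host == "Running" && n.APIServer == "Running" {
			running++
		}
	}
	switch {
	case total == 0 || running == 0:
		return "Stopped"
	case running < total:
		return "Degraded"
	default:
		return "Running"
	}
}

func main() {
	// The three nodes from the `status` output after `minikube stop`.
	nodes := []Status{
		{Name: "ha-250305", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"},
		{Name: "ha-250305-m02", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"},
		{Name: "ha-250305-m04", Host: "Stopped", Kubelet: "Stopped", Worker: true},
	}
	fmt.Println(clusterState(nodes))
}
```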

TestMultiControlPlane/serial/RestartCluster (159.12s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-250305 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0503 22:04:45.606627   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-250305 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m38.358109356s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (159.12s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

TestMultiControlPlane/serial/AddSecondaryNode (76.18s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-250305 --control-plane -v=7 --alsologtostderr
E0503 22:07:05.371233   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-250305 --control-plane -v=7 --alsologtostderr: (1m15.318739322s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-250305 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.18s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

TestJSONOutput/start/Command (57.24s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-277491 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-277491 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (57.23578109s)
--- PASS: TestJSONOutput/start/Command (57.24s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-277491 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-277491 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.32s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-277491 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-277491 --output=json --user=testUser: (2.323681556s)
--- PASS: TestJSONOutput/stop/Command (2.32s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-993856 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-993856 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.444359ms)
-- stdout --
	{"specversion":"1.0","id":"d7d61b3a-bda4-4b37-95e7-985c786fd2d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-993856] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac302c16-f45e-49d9-b5b7-417beb3c3b2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18793"}}
	{"specversion":"1.0","id":"ed67cb86-79ec-4225-b895-321a7b8ac42c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fb57d851-61a3-4115-b8d3-07210d5d7d6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18793-6010/kubeconfig"}}
	{"specversion":"1.0","id":"badd96eb-3fdc-4a0b-9f51-2c41a6e3a691","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18793-6010/.minikube"}}
	{"specversion":"1.0","id":"754238cd-25a8-473c-b9af-128df3cc16dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"357d241c-eb6c-40ff-bf5a-8c692d686c82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0f9b7c20-c13b-4219-ad26-5397e6533a9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-993856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-993856
--- PASS: TestErrorJSONOutput (0.22s)
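The `io.k8s.sigs.minikube.error` event in the stdout block above is one CloudEvents-style JSON object per line. A minimal sketch of pulling the exit code and error name out of such a line (the sample line is copied verbatim from the log above; only the standard-library `json` module is used):

```python
import json

# Error event emitted by `minikube start --output=json`, copied from the
# TestErrorJSONOutput stdout block above.
line = """{"specversion":"1.0","id":"0f9b7c20-c13b-4219-ad26-5397e6533a9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}"""

event = json.loads(line)
# The payload lives under "data"; note that exitcode is a JSON string,
# not a number, so it needs an explicit int() conversion.
exit_code = int(event["data"]["exitcode"])
error_name = event["data"]["name"]
print(exit_code, error_name)  # 56 DRV_UNSUPPORTED_OS
```

The parsed exit code (56) matches the process exit status the test harness reports above.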

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (95.18s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-395717 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-395717 --driver=kvm2  --container-runtime=containerd: (44.888362751s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-398121 --driver=kvm2  --container-runtime=containerd
E0503 22:09:45.606925   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-398121 --driver=kvm2  --container-runtime=containerd: (47.580549785s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-395717
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-398121
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-398121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-398121
helpers_test.go:175: Cleaning up "first-395717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-395717
--- PASS: TestMinikubeProfile (95.18s)

TestMountStart/serial/StartWithMountFirst (31.75s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-568698 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-568698 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (30.752825111s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.75s)

TestMountStart/serial/VerifyMountFirst (0.39s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-568698 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-568698 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (31.14s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-585158 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-585158 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (30.13730475s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.14s)

TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-585158 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-585158 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.66s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-568698 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-585158 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-585158 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.42s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-585158
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-585158: (1.416603158s)
--- PASS: TestMountStart/serial/Stop (1.42s)

TestMountStart/serial/RestartStopped (24.17s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-585158
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-585158: (23.174479932s)
--- PASS: TestMountStart/serial/RestartStopped (24.17s)

TestMountStart/serial/VerifyMountPostStop (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-585158 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-585158 ssh -- mount | grep 9p
E0503 22:12:05.371065   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (101.19s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-302305 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0503 22:12:48.653980   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-302305 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m40.761807219s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (101.19s)

TestMultiNode/serial/DeployApp2Nodes (6.69s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-302305 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-302305 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-302305 -- rollout status deployment/busybox: (5.080028642s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-302305 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-302305 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-302305 -- exec busybox-fc5497c4f-s2kml -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-302305 -- exec busybox-fc5497c4f-w2tl2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-302305 -- exec busybox-fc5497c4f-s2kml -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-302305 -- exec busybox-fc5497c4f-w2tl2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-302305 -- exec busybox-fc5497c4f-s2kml -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-302305 -- exec busybox-fc5497c4f-w2tl2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.69s)
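The `kubectl get pods -o jsonpath='{.items[*].metadata.name}'` invocations above print the matched values joined by single spaces on one line. A trivial sketch of turning that output back into a list, using the two busybox pod names that appear in this run's log:

```python
# Space-separated jsonpath output, as produced by
# `kubectl get pods -o jsonpath='{.items[*].metadata.name}'`.
# The pod names are the ones exec'd against in the log above.
jsonpath_output = "busybox-fc5497c4f-s2kml busybox-fc5497c4f-w2tl2"

pod_names = jsonpath_output.split()  # split on whitespace into a list
print(pod_names)
```

This is how the test harness derives the per-pod names it then passes to `kubectl exec`.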

TestMultiNode/serial/PingHostFrom2Pods (0.87s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-302305 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-302305 -- exec busybox-fc5497c4f-s2kml -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-302305 -- exec busybox-fc5497c4f-s2kml -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-302305 -- exec busybox-fc5497c4f-w2tl2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-302305 -- exec busybox-fc5497c4f-w2tl2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)
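The shell pipeline used in this test, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, takes line 5 of the nslookup output and prints its third space-separated field, which is the host IP the pods then ping (192.168.39.1 in this run). A Python sketch of the same extraction; the sample text is hypothetical busybox-style nslookup output, shaped so the address lands on line 5 as the pipeline assumes:

```python
# Hypothetical busybox nslookup output for host.minikube.internal; the
# resolved address matches the 192.168.39.1 ping target in the log above.
sample = """Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal
"""

line5 = sample.splitlines()[4]   # awk 'NR==5'     -> 1-indexed line 5
host_ip = line5.split(" ")[2]    # cut -d' ' -f3   -> 1-indexed field 3
print(host_ip)  # 192.168.39.1
```

Note the pipeline is brittle by design: it depends on the exact line/field layout of busybox's nslookup output.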

TestMultiNode/serial/AddNode (43.53s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-302305 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-302305 -v 3 --alsologtostderr: (42.960954926s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.53s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-302305 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.22s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.31s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 cp testdata/cp-test.txt multinode-302305:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 cp multinode-302305:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2186927945/001/cp-test_multinode-302305.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 cp multinode-302305:/home/docker/cp-test.txt multinode-302305-m02:/home/docker/cp-test_multinode-302305_multinode-302305-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305-m02 "sudo cat /home/docker/cp-test_multinode-302305_multinode-302305-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 cp multinode-302305:/home/docker/cp-test.txt multinode-302305-m03:/home/docker/cp-test_multinode-302305_multinode-302305-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305-m03 "sudo cat /home/docker/cp-test_multinode-302305_multinode-302305-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 cp testdata/cp-test.txt multinode-302305-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 cp multinode-302305-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2186927945/001/cp-test_multinode-302305-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 cp multinode-302305-m02:/home/docker/cp-test.txt multinode-302305:/home/docker/cp-test_multinode-302305-m02_multinode-302305.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305 "sudo cat /home/docker/cp-test_multinode-302305-m02_multinode-302305.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 cp multinode-302305-m02:/home/docker/cp-test.txt multinode-302305-m03:/home/docker/cp-test_multinode-302305-m02_multinode-302305-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305-m03 "sudo cat /home/docker/cp-test_multinode-302305-m02_multinode-302305-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 cp testdata/cp-test.txt multinode-302305-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 cp multinode-302305-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2186927945/001/cp-test_multinode-302305-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 cp multinode-302305-m03:/home/docker/cp-test.txt multinode-302305:/home/docker/cp-test_multinode-302305-m03_multinode-302305.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305 "sudo cat /home/docker/cp-test_multinode-302305-m03_multinode-302305.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 cp multinode-302305-m03:/home/docker/cp-test.txt multinode-302305-m02:/home/docker/cp-test_multinode-302305-m03_multinode-302305-m02.txt
E0503 22:14:45.606475   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 ssh -n multinode-302305-m02 "sudo cat /home/docker/cp-test_multinode-302305-m03_multinode-302305-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.31s)

TestMultiNode/serial/StopNode (2.41s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-302305 node stop m03: (1.541787535s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-302305 status: exit status 7 (430.005869ms)
-- stdout --
	multinode-302305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-302305-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-302305-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-302305 status --alsologtostderr: exit status 7 (439.142808ms)
-- stdout --
	multinode-302305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-302305-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-302305-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
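The plain-text `minikube status` output above is a sequence of blank-line-separated node blocks: a node name followed by `field: value` lines. A small sketch of parsing it into a dict (the input is the stdout block above, with the log's leading tabs stripped), which makes the exit-status-7 condition, a stopped node, easy to check programmatically:

```python
# `minikube -p multinode-302305 status` output, copied from the
# -- stdout -- block above (tab indentation removed).
status_text = """multinode-302305
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-302305-m02
type: Worker
host: Running
kubelet: Running

multinode-302305-m03
type: Worker
host: Stopped
kubelet: Stopped
"""

nodes = {}
for block in status_text.strip().split("\n\n"):
    lines = block.splitlines()
    name = lines[0]                                   # first line: node name
    nodes[name] = dict(l.split(": ", 1) for l in lines[1:])

print(nodes["multinode-302305-m03"]["host"])  # Stopped
```

Here m03 is the node the test deliberately stopped, which is why `status` exits non-zero (exit status 7) despite the control plane running.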
** stderr ** 
	I0503 22:14:48.128783   38879 out.go:291] Setting OutFile to fd 1 ...
	I0503 22:14:48.128900   38879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 22:14:48.128912   38879 out.go:304] Setting ErrFile to fd 2...
	I0503 22:14:48.128918   38879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 22:14:48.129091   38879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
	I0503 22:14:48.129273   38879 out.go:298] Setting JSON to false
	I0503 22:14:48.129295   38879 mustload.go:65] Loading cluster: multinode-302305
	I0503 22:14:48.129414   38879 notify.go:220] Checking for updates...
	I0503 22:14:48.129671   38879 config.go:182] Loaded profile config "multinode-302305": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0503 22:14:48.129685   38879 status.go:255] checking status of multinode-302305 ...
	I0503 22:14:48.130040   38879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 22:14:48.130093   38879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 22:14:48.146974   38879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38215
	I0503 22:14:48.147333   38879 main.go:141] libmachine: () Calling .GetVersion
	I0503 22:14:48.147954   38879 main.go:141] libmachine: Using API Version  1
	I0503 22:14:48.147993   38879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 22:14:48.148332   38879 main.go:141] libmachine: () Calling .GetMachineName
	I0503 22:14:48.148513   38879 main.go:141] libmachine: (multinode-302305) Calling .GetState
	I0503 22:14:48.150106   38879 status.go:330] multinode-302305 host status = "Running" (err=<nil>)
	I0503 22:14:48.150135   38879 host.go:66] Checking if "multinode-302305" exists ...
	I0503 22:14:48.150508   38879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 22:14:48.150576   38879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 22:14:48.164877   38879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I0503 22:14:48.165258   38879 main.go:141] libmachine: () Calling .GetVersion
	I0503 22:14:48.165704   38879 main.go:141] libmachine: Using API Version  1
	I0503 22:14:48.165724   38879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 22:14:48.165988   38879 main.go:141] libmachine: () Calling .GetMachineName
	I0503 22:14:48.166168   38879 main.go:141] libmachine: (multinode-302305) Calling .GetIP
	I0503 22:14:48.168770   38879 main.go:141] libmachine: (multinode-302305) DBG | domain multinode-302305 has defined MAC address 52:54:00:87:01:de in network mk-multinode-302305
	I0503 22:14:48.169134   38879 main.go:141] libmachine: (multinode-302305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:de", ip: ""} in network mk-multinode-302305: {Iface:virbr1 ExpiryTime:2024-05-03 23:12:21 +0000 UTC Type:0 Mac:52:54:00:87:01:de Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-302305 Clientid:01:52:54:00:87:01:de}
	I0503 22:14:48.169162   38879 main.go:141] libmachine: (multinode-302305) DBG | domain multinode-302305 has defined IP address 192.168.39.248 and MAC address 52:54:00:87:01:de in network mk-multinode-302305
	I0503 22:14:48.169323   38879 host.go:66] Checking if "multinode-302305" exists ...
	I0503 22:14:48.169636   38879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 22:14:48.169670   38879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 22:14:48.184153   38879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33301
	I0503 22:14:48.184504   38879 main.go:141] libmachine: () Calling .GetVersion
	I0503 22:14:48.184985   38879 main.go:141] libmachine: Using API Version  1
	I0503 22:14:48.185005   38879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 22:14:48.185294   38879 main.go:141] libmachine: () Calling .GetMachineName
	I0503 22:14:48.185456   38879 main.go:141] libmachine: (multinode-302305) Calling .DriverName
	I0503 22:14:48.185658   38879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0503 22:14:48.185685   38879 main.go:141] libmachine: (multinode-302305) Calling .GetSSHHostname
	I0503 22:14:48.187790   38879 main.go:141] libmachine: (multinode-302305) DBG | domain multinode-302305 has defined MAC address 52:54:00:87:01:de in network mk-multinode-302305
	I0503 22:14:48.188202   38879 main.go:141] libmachine: (multinode-302305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:de", ip: ""} in network mk-multinode-302305: {Iface:virbr1 ExpiryTime:2024-05-03 23:12:21 +0000 UTC Type:0 Mac:52:54:00:87:01:de Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-302305 Clientid:01:52:54:00:87:01:de}
	I0503 22:14:48.188230   38879 main.go:141] libmachine: (multinode-302305) DBG | domain multinode-302305 has defined IP address 192.168.39.248 and MAC address 52:54:00:87:01:de in network mk-multinode-302305
	I0503 22:14:48.188396   38879 main.go:141] libmachine: (multinode-302305) Calling .GetSSHPort
	I0503 22:14:48.188549   38879 main.go:141] libmachine: (multinode-302305) Calling .GetSSHKeyPath
	I0503 22:14:48.188718   38879 main.go:141] libmachine: (multinode-302305) Calling .GetSSHUsername
	I0503 22:14:48.188848   38879 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/multinode-302305/id_rsa Username:docker}
	I0503 22:14:48.274979   38879 ssh_runner.go:195] Run: systemctl --version
	I0503 22:14:48.282834   38879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0503 22:14:48.303243   38879 kubeconfig.go:125] found "multinode-302305" server: "https://192.168.39.248:8443"
	I0503 22:14:48.303277   38879 api_server.go:166] Checking apiserver status ...
	I0503 22:14:48.303318   38879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 22:14:48.320065   38879 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup
	W0503 22:14:48.331032   38879 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0503 22:14:48.331097   38879 ssh_runner.go:195] Run: ls
	I0503 22:14:48.336422   38879 api_server.go:253] Checking apiserver healthz at https://192.168.39.248:8443/healthz ...
	I0503 22:14:48.341191   38879 api_server.go:279] https://192.168.39.248:8443/healthz returned 200:
	ok
	I0503 22:14:48.341211   38879 status.go:422] multinode-302305 apiserver status = Running (err=<nil>)
	I0503 22:14:48.341220   38879 status.go:257] multinode-302305 status: &{Name:multinode-302305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0503 22:14:48.341235   38879 status.go:255] checking status of multinode-302305-m02 ...
	I0503 22:14:48.341510   38879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 22:14:48.341540   38879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 22:14:48.356405   38879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39701
	I0503 22:14:48.356816   38879 main.go:141] libmachine: () Calling .GetVersion
	I0503 22:14:48.357239   38879 main.go:141] libmachine: Using API Version  1
	I0503 22:14:48.357269   38879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 22:14:48.357552   38879 main.go:141] libmachine: () Calling .GetMachineName
	I0503 22:14:48.357760   38879 main.go:141] libmachine: (multinode-302305-m02) Calling .GetState
	I0503 22:14:48.359190   38879 status.go:330] multinode-302305-m02 host status = "Running" (err=<nil>)
	I0503 22:14:48.359204   38879 host.go:66] Checking if "multinode-302305-m02" exists ...
	I0503 22:14:48.359593   38879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 22:14:48.359639   38879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 22:14:48.375098   38879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46767
	I0503 22:14:48.375476   38879 main.go:141] libmachine: () Calling .GetVersion
	I0503 22:14:48.375923   38879 main.go:141] libmachine: Using API Version  1
	I0503 22:14:48.375946   38879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 22:14:48.376255   38879 main.go:141] libmachine: () Calling .GetMachineName
	I0503 22:14:48.376444   38879 main.go:141] libmachine: (multinode-302305-m02) Calling .GetIP
	I0503 22:14:48.378853   38879 main.go:141] libmachine: (multinode-302305-m02) DBG | domain multinode-302305-m02 has defined MAC address 52:54:00:15:a1:44 in network mk-multinode-302305
	I0503 22:14:48.379212   38879 main.go:141] libmachine: (multinode-302305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a1:44", ip: ""} in network mk-multinode-302305: {Iface:virbr1 ExpiryTime:2024-05-03 23:13:21 +0000 UTC Type:0 Mac:52:54:00:15:a1:44 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:multinode-302305-m02 Clientid:01:52:54:00:15:a1:44}
	I0503 22:14:48.379242   38879 main.go:141] libmachine: (multinode-302305-m02) DBG | domain multinode-302305-m02 has defined IP address 192.168.39.8 and MAC address 52:54:00:15:a1:44 in network mk-multinode-302305
	I0503 22:14:48.379350   38879 host.go:66] Checking if "multinode-302305-m02" exists ...
	I0503 22:14:48.379635   38879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 22:14:48.379693   38879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 22:14:48.395280   38879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I0503 22:14:48.395678   38879 main.go:141] libmachine: () Calling .GetVersion
	I0503 22:14:48.396076   38879 main.go:141] libmachine: Using API Version  1
	I0503 22:14:48.396098   38879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 22:14:48.396369   38879 main.go:141] libmachine: () Calling .GetMachineName
	I0503 22:14:48.396532   38879 main.go:141] libmachine: (multinode-302305-m02) Calling .DriverName
	I0503 22:14:48.396677   38879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0503 22:14:48.396694   38879 main.go:141] libmachine: (multinode-302305-m02) Calling .GetSSHHostname
	I0503 22:14:48.399121   38879 main.go:141] libmachine: (multinode-302305-m02) DBG | domain multinode-302305-m02 has defined MAC address 52:54:00:15:a1:44 in network mk-multinode-302305
	I0503 22:14:48.399447   38879 main.go:141] libmachine: (multinode-302305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:a1:44", ip: ""} in network mk-multinode-302305: {Iface:virbr1 ExpiryTime:2024-05-03 23:13:21 +0000 UTC Type:0 Mac:52:54:00:15:a1:44 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:multinode-302305-m02 Clientid:01:52:54:00:15:a1:44}
	I0503 22:14:48.399490   38879 main.go:141] libmachine: (multinode-302305-m02) DBG | domain multinode-302305-m02 has defined IP address 192.168.39.8 and MAC address 52:54:00:15:a1:44 in network mk-multinode-302305
	I0503 22:14:48.399625   38879 main.go:141] libmachine: (multinode-302305-m02) Calling .GetSSHPort
	I0503 22:14:48.399807   38879 main.go:141] libmachine: (multinode-302305-m02) Calling .GetSSHKeyPath
	I0503 22:14:48.399977   38879 main.go:141] libmachine: (multinode-302305-m02) Calling .GetSSHUsername
	I0503 22:14:48.400129   38879 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18793-6010/.minikube/machines/multinode-302305-m02/id_rsa Username:docker}
	I0503 22:14:48.479741   38879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0503 22:14:48.494075   38879 status.go:257] multinode-302305-m02 status: &{Name:multinode-302305-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0503 22:14:48.494127   38879 status.go:255] checking status of multinode-302305-m03 ...
	I0503 22:14:48.494552   38879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 22:14:48.494592   38879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 22:14:48.510802   38879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46229
	I0503 22:14:48.511270   38879 main.go:141] libmachine: () Calling .GetVersion
	I0503 22:14:48.511761   38879 main.go:141] libmachine: Using API Version  1
	I0503 22:14:48.511790   38879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 22:14:48.512095   38879 main.go:141] libmachine: () Calling .GetMachineName
	I0503 22:14:48.512292   38879 main.go:141] libmachine: (multinode-302305-m03) Calling .GetState
	I0503 22:14:48.513762   38879 status.go:330] multinode-302305-m03 host status = "Stopped" (err=<nil>)
	I0503 22:14:48.513774   38879 status.go:343] host is not running, skipping remaining checks
	I0503 22:14:48.513779   38879 status.go:257] multinode-302305-m03 status: &{Name:multinode-302305-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)

TestMultiNode/serial/StartAfterStop (24.77s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-302305 node start m03 -v=7 --alsologtostderr: (24.156360232s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (24.77s)

TestMultiNode/serial/RestartKeepsNodes (296.63s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-302305
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-302305
E0503 22:17:05.372135   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-302305: (3m5.394975579s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-302305 --wait=true -v=8 --alsologtostderr
E0503 22:19:45.606752   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 22:20:08.418076   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-302305 --wait=true -v=8 --alsologtostderr: (1m51.12456851s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-302305
--- PASS: TestMultiNode/serial/RestartKeepsNodes (296.63s)

TestMultiNode/serial/DeleteNode (2.26s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-302305 node delete m03: (1.723753226s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.26s)

TestMultiNode/serial/StopMultiNode (184.2s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 stop
E0503 22:22:05.370971   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-302305 stop: (3m4.018438421s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-302305 status: exit status 7 (92.628304ms)

-- stdout --
	multinode-302305
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-302305-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-302305 status --alsologtostderr: exit status 7 (92.443778ms)

-- stdout --
	multinode-302305
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-302305-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0503 22:23:16.345208   41985 out.go:291] Setting OutFile to fd 1 ...
	I0503 22:23:16.345295   41985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 22:23:16.345303   41985 out.go:304] Setting ErrFile to fd 2...
	I0503 22:23:16.345306   41985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 22:23:16.345459   41985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
	I0503 22:23:16.345599   41985 out.go:298] Setting JSON to false
	I0503 22:23:16.345627   41985 mustload.go:65] Loading cluster: multinode-302305
	I0503 22:23:16.345666   41985 notify.go:220] Checking for updates...
	I0503 22:23:16.345995   41985 config.go:182] Loaded profile config "multinode-302305": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0503 22:23:16.346009   41985 status.go:255] checking status of multinode-302305 ...
	I0503 22:23:16.346455   41985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 22:23:16.346519   41985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 22:23:16.364049   41985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
	I0503 22:23:16.364433   41985 main.go:141] libmachine: () Calling .GetVersion
	I0503 22:23:16.364998   41985 main.go:141] libmachine: Using API Version  1
	I0503 22:23:16.365027   41985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 22:23:16.365340   41985 main.go:141] libmachine: () Calling .GetMachineName
	I0503 22:23:16.365528   41985 main.go:141] libmachine: (multinode-302305) Calling .GetState
	I0503 22:23:16.367058   41985 status.go:330] multinode-302305 host status = "Stopped" (err=<nil>)
	I0503 22:23:16.367076   41985 status.go:343] host is not running, skipping remaining checks
	I0503 22:23:16.367083   41985 status.go:257] multinode-302305 status: &{Name:multinode-302305 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0503 22:23:16.367106   41985 status.go:255] checking status of multinode-302305-m02 ...
	I0503 22:23:16.367437   41985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0503 22:23:16.367474   41985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0503 22:23:16.381437   41985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46321
	I0503 22:23:16.381760   41985 main.go:141] libmachine: () Calling .GetVersion
	I0503 22:23:16.382164   41985 main.go:141] libmachine: Using API Version  1
	I0503 22:23:16.382183   41985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0503 22:23:16.382475   41985 main.go:141] libmachine: () Calling .GetMachineName
	I0503 22:23:16.382650   41985 main.go:141] libmachine: (multinode-302305-m02) Calling .GetState
	I0503 22:23:16.384181   41985 status.go:330] multinode-302305-m02 host status = "Stopped" (err=<nil>)
	I0503 22:23:16.384195   41985 status.go:343] host is not running, skipping remaining checks
	I0503 22:23:16.384201   41985 status.go:257] multinode-302305-m02 status: &{Name:multinode-302305-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.20s)

TestMultiNode/serial/RestartMultiNode (76.4s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-302305 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-302305 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m15.876855669s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-302305 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (76.40s)

TestMultiNode/serial/ValidateNameConflict (48.95s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-302305
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-302305-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-302305-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (70.920138ms)

-- stdout --
	* [multinode-302305-m02] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18793
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18793-6010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18793-6010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-302305-m02' is duplicated with machine name 'multinode-302305-m02' in profile 'multinode-302305'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-302305-m03 --driver=kvm2  --container-runtime=containerd
E0503 22:24:45.606885   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-302305-m03 --driver=kvm2  --container-runtime=containerd: (47.628561619s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-302305
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-302305: exit status 80 (222.869094ms)

-- stdout --
	* Adding node m03 to cluster multinode-302305 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-302305-m03 already exists in multinode-302305-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-302305-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.95s)

TestPreload (451.78s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-181619 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0503 22:27:05.371715   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-181619 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (3m1.907559838s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-181619 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-181619 image pull gcr.io/k8s-minikube/busybox: (2.925615168s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-181619
E0503 22:29:28.654379   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 22:29:45.606422   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-181619: (1m32.438767863s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-181619 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0503 22:32:05.371481   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-181619 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (2m53.160297162s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-181619 image list
helpers_test.go:175: Cleaning up "test-preload-181619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-181619
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-181619: (1.097915834s)
--- PASS: TestPreload (451.78s)

TestScheduledStopUnix (116.81s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-914190 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-914190 --memory=2048 --driver=kvm2  --container-runtime=containerd: (45.04491198s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-914190 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-914190 -n scheduled-stop-914190
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-914190 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-914190 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-914190 -n scheduled-stop-914190
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-914190
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-914190 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0503 22:34:45.606903   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-914190
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-914190: exit status 7 (75.542776ms)

-- stdout --
	scheduled-stop-914190
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-914190 -n scheduled-stop-914190
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-914190 -n scheduled-stop-914190: exit status 7 (74.604341ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-914190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-914190
--- PASS: TestScheduledStopUnix (116.81s)

TestRunningBinaryUpgrade (179.09s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1264029724 start -p running-upgrade-646266 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1264029724 start -p running-upgrade-646266 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m18.056698814s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-646266 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0503 22:42:05.370472   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-646266 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m36.651944758s)
helpers_test.go:175: Cleaning up "running-upgrade-646266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-646266
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-646266: (1.212563613s)
--- PASS: TestRunningBinaryUpgrade (179.09s)

TestKubernetesUpgrade (195.36s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-533508 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-533508 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m15.367950103s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-533508
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-533508: (2.32874882s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-533508 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-533508 status --format={{.Host}}: exit status 7 (75.897963ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-533508 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-533508 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (53.398076388s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-533508 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-533508 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-533508 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (96.536384ms)

-- stdout --
	* [kubernetes-upgrade-533508] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18793
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18793-6010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18793-6010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-533508
	    minikube start -p kubernetes-upgrade-533508 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5335082 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-533508 --kubernetes-version=v1.30.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-533508 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-533508 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m2.819943975s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-533508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-533508
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-533508: (1.222252111s)
--- PASS: TestKubernetesUpgrade (195.36s)
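The exit-status-106 path above is minikube's K8S_DOWNGRADE_UNSUPPORTED guard: a requested Kubernetes version older than the running cluster's is refused rather than applied, and the suggestion text offers delete/recreate, a second profile, or staying on the current version. The sketch below is a hypothetical re-implementation of that kind of version comparison, not minikube's actual code; the function names and the message wording are assumptions.

```python
# Hypothetical sketch of a downgrade guard like the one behind
# K8S_DOWNGRADE_UNSUPPORTED (exit status 106). Not minikube's real code.

def parse_version(v):
    """Parse 'v1.30.0' into a comparable tuple (1, 30, 0)."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def check_upgrade(existing, requested):
    """Return a minikube-style exit code: 0 for an upgrade or no-op,
    106 when the request would downgrade the existing cluster."""
    if parse_version(requested) < parse_version(existing):
        print("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: "
              f"cannot downgrade {existing} cluster to {requested}")
        return 106
    return 0

print(check_upgrade("v1.30.0", "v1.20.0"))  # downgrade is refused -> 106
print(check_upgrade("v1.30.0", "v1.30.0"))  # same version is a no-op -> 0
```

Tuple comparison makes `v1.20.0 < v1.30.0` fall out of lexicographic ordering without a semver dependency, which matches the simple major.minor.patch versions seen in the log.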

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-966618 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-966618 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (99.355295ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-966618] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18793
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18793-6010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18793-6010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
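The MK_USAGE failure exercised above (exit status 14) is plain flag validation: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. A rough sketch of that style of check follows; the logic and wording are assumptions modeled on the stderr in the log, not minikube's source.

```python
# Sketch of mutually-exclusive flag validation like minikube's
# MK_USAGE check (exit status 14). Hypothetical, not the real source.

def validate_flags(no_kubernetes, kubernetes_version):
    """Reject --kubernetes-version when --no-kubernetes is set."""
    if no_kubernetes and kubernetes_version:
        print("X Exiting due to MK_USAGE: cannot specify "
              "--kubernetes-version with --no-kubernetes")
        return 14
    return 0

print(validate_flags(True, "1.20"))  # conflicting flags -> 14
print(validate_flags(True, None))    # --no-kubernetes alone -> 0
```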

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (128.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-966618 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-966618 --driver=kvm2  --container-runtime=containerd: (2m8.440281952s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-966618 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (128.72s)

                                                
                                    
TestNetworkPlugins/group/false (3.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-002994 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-002994 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (109.893183ms)

                                                
                                                
-- stdout --
	* [false-002994] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18793
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18793-6010/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18793-6010/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 22:36:33.727365   48356 out.go:291] Setting OutFile to fd 1 ...
	I0503 22:36:33.727629   48356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 22:36:33.727639   48356 out.go:304] Setting ErrFile to fd 2...
	I0503 22:36:33.727645   48356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 22:36:33.727850   48356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18793-6010/.minikube/bin
	I0503 22:36:33.728429   48356 out.go:298] Setting JSON to false
	I0503 22:36:33.729349   48356 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4735,"bootTime":1714771059,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0503 22:36:33.729406   48356 start.go:139] virtualization: kvm guest
	I0503 22:36:33.731644   48356 out.go:177] * [false-002994] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0503 22:36:33.733229   48356 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 22:36:33.734557   48356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 22:36:33.733300   48356 notify.go:220] Checking for updates...
	I0503 22:36:33.737400   48356 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18793-6010/kubeconfig
	I0503 22:36:33.738778   48356 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18793-6010/.minikube
	I0503 22:36:33.740158   48356 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0503 22:36:33.741434   48356 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 22:36:33.743245   48356 config.go:182] Loaded profile config "NoKubernetes-966618": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0503 22:36:33.743346   48356 config.go:182] Loaded profile config "cert-expiration-996621": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0503 22:36:33.743472   48356 config.go:182] Loaded profile config "cert-options-747044": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0503 22:36:33.743592   48356 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 22:36:33.778551   48356 out.go:177] * Using the kvm2 driver based on user configuration
	I0503 22:36:33.779836   48356 start.go:297] selected driver: kvm2
	I0503 22:36:33.779851   48356 start.go:901] validating driver "kvm2" against <nil>
	I0503 22:36:33.779861   48356 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 22:36:33.781691   48356 out.go:177] 
	W0503 22:36:33.783098   48356 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0503 22:36:33.784523   48356 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-002994 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-002994

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-002994

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-002994

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-002994

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-002994

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-002994

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-002994

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-002994

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-002994

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-002994

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-002994

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-002994" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-002994" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-002994" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-002994" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-002994" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-002994" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-002994" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-002994" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-002994" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-002994" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-002994" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/18793-6010/.minikube/ca.crt
extensions:
- extension:
last-update: Fri, 03 May 2024 22:35:35 UTC
provider: minikube.sigs.k8s.io
version: v1.33.0
name: cluster_info
server: https://192.168.39.57:8443
name: cert-expiration-996621
contexts:
- context:
cluster: cert-expiration-996621
extensions:
- extension:
last-update: Fri, 03 May 2024 22:35:35 UTC
provider: minikube.sigs.k8s.io
version: v1.33.0
name: context_info
namespace: default
user: cert-expiration-996621
name: cert-expiration-996621
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-996621
user:
client-certificate: /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/cert-expiration-996621/client.crt
client-key: /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/cert-expiration-996621/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-002994

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-002994"

                                                
                                                
----------------------- debugLogs end: false-002994 [took: 2.915760535s] --------------------------------
helpers_test.go:175: Cleaning up "false-002994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-002994
--- PASS: TestNetworkPlugins/group/false (3.17s)
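The rejection above ('The "containerd" container runtime requires CNI', MK_USAGE, exit 14) pairs the runtime choice with the CNI flag: non-docker runtimes cannot start pods without a CNI plugin, so `--cni=false` is refused up front. The sketch below mirrors that compatibility check; the set of affected runtimes and the helper name are assumptions for illustration.

```python
# Sketch of the runtime/CNI compatibility check that rejects
# --cni=false with --container-runtime=containerd (MK_USAGE, exit 14).
# Hypothetical logic, not minikube's actual implementation.

# Runtimes assumed (for this sketch) to need a CNI plugin.
CNI_REQUIRED_RUNTIMES = {"containerd", "crio"}

def validate_cni(container_runtime, cni):
    """Refuse to disable CNI for runtimes that depend on it."""
    if cni == "false" and container_runtime in CNI_REQUIRED_RUNTIMES:
        print(f'X Exiting due to MK_USAGE: The "{container_runtime}" '
              "container runtime requires CNI")
        return 14
    return 0

print(validate_cni("containerd", "false"))  # rejected -> 14
print(validate_cni("docker", "false"))      # docker is tolerated -> 0
```

Failing fast here is why the test finishes in 3.17s: the error surfaces during flag validation, before any VM is created.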

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (51.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-966618 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0503 22:37:05.370356   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-966618 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (50.475581604s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-966618 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-966618 status -o json: exit status 2 (265.519416ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-966618","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-966618
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (51.59s)
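The stdout above shows the JSON schema `minikube status -o json` emits, and the non-zero exit (status 2) accompanies a host that is running while the Kubernetes components are stopped. A small sketch of consuming that output; the field names are taken verbatim from the log, while treating "Host running, kubelet/apiserver stopped" as the healthy no-kubernetes shape is this sketch's assumption.

```python
import json

# Status JSON exactly as emitted in the log above by `minikube status -o json`.
raw = ('{"Name":"NoKubernetes-966618","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

status = json.loads(raw)

# Host is up but the Kubernetes components are intentionally stopped,
# which is what a --no-kubernetes profile should look like.
host_up = status["Host"] == "Running"
k8s_down = status["Kubelet"] == "Stopped" and status["APIServer"] == "Stopped"

print(host_up and k8s_down)  # True for this profile
```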

                                                
                                    
TestStoppedBinaryUpgrade/Setup (5.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.93s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (279.78s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.416160925 start -p stopped-upgrade-484504 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.416160925 start -p stopped-upgrade-484504 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m11.051166951s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.416160925 -p stopped-upgrade-484504 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.416160925 -p stopped-upgrade-484504 stop: (1.479194181s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-484504 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0503 22:39:45.606458   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-484504 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (2m27.253256043s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (279.78s)

                                                
                                    
TestNoKubernetes/serial/Start (30.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-966618 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-966618 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (30.7128081s)
--- PASS: TestNoKubernetes/serial/Start (30.71s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-966618 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-966618 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.890291ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (28.92s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.502386241s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.420193014s)
--- PASS: TestNoKubernetes/serial/ProfileList (28.92s)

TestNoKubernetes/serial/Stop (1.63s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-966618
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-966618: (1.630946393s)
--- PASS: TestNoKubernetes/serial/Stop (1.63s)

TestNoKubernetes/serial/StartNoArgs (44.47s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-966618 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-966618 --driver=kvm2  --container-runtime=containerd: (44.47153664s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.47s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-966618 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-966618 "sudo systemctl is-active --quiet service kubelet": exit status 1 (247.300413ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

TestPause/serial/Start (142.14s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-642972 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-642972 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m22.13924803s)
--- PASS: TestPause/serial/Start (142.14s)

TestNetworkPlugins/group/auto/Start (150.24s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-002994 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-002994 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m30.235786557s)
--- PASS: TestNetworkPlugins/group/auto/Start (150.24s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.44s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-484504
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-484504: (1.443345108s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.44s)

TestNetworkPlugins/group/kindnet/Start (68.39s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-002994 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-002994 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m8.388065193s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.39s)

TestPause/serial/SecondStartNoReconfiguration (56.58s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-642972 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-642972 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (56.555037688s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (56.58s)

TestNetworkPlugins/group/calico/Start (100.13s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-002994 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-002994 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m40.132544397s)
--- PASS: TestNetworkPlugins/group/calico/Start (100.13s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-002994 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-002994 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-27x7q" [1d3e76c9-ab10-40a2-bbd7-fecc68e81861] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-27x7q" [1d3e76c9-ab10-40a2-bbd7-fecc68e81861] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005227802s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-002994 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-002994 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-002994 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-57mzl" [e4bbebc8-8146-481d-9e68-c2f3dd5f6180] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.009816667s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (89.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-002994 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-002994 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m29.169059858s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (89.17s)

TestPause/serial/Pause (0.87s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-642972 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.87s)

TestPause/serial/VerifyStatus (0.29s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-642972 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-642972 --output=json --layout=cluster: exit status 2 (293.439746ms)

-- stdout --
	{"Name":"pause-642972","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-642972","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)

TestPause/serial/Unpause (0.82s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-642972 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.82s)

TestPause/serial/PauseAgain (1.07s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-642972 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-642972 --alsologtostderr -v=5: (1.065876264s)
--- PASS: TestPause/serial/PauseAgain (1.07s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-002994 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-002994 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-snwzw" [36deee0c-a709-4217-a3bb-0ab200bca16f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-snwzw" [36deee0c-a709-4217-a3bb-0ab200bca16f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005180063s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.32s)

TestPause/serial/DeletePaused (1.10s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-642972 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-642972 --alsologtostderr -v=5: (1.100446594s)
--- PASS: TestPause/serial/DeletePaused (1.10s)

TestPause/serial/VerifyDeletedResources (0.52s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.52s)

TestNetworkPlugins/group/enable-default-cni/Start (87.08s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-002994 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-002994 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m27.076679662s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.08s)

TestNetworkPlugins/group/kindnet/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-002994 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-002994 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-002994 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (117.84s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-002994 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-002994 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m57.839179418s)
--- PASS: TestNetworkPlugins/group/flannel/Start (117.84s)

TestNetworkPlugins/group/calico/ControllerPod (5.09s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ghmbk" [4067364a-8eb0-435a-bd0a-c133969209be] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.083848148s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.09s)

TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-002994 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

TestNetworkPlugins/group/calico/NetCatPod (10.40s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-002994 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vtkrl" [95067f79-03ca-4fd4-a716-d4e2e3db9b59] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-vtkrl" [95067f79-03ca-4fd4-a716-d4e2e3db9b59] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00530151s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.40s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-002994 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-002994 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-002994 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.70s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-002994 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.70s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.96s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-002994 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-444f9" [fbf47032-7c7b-46c1-8f69-49df907c918e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-444f9" [fbf47032-7c7b-46c1-8f69-49df907c918e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004065137s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.96s)

TestNetworkPlugins/group/bridge/Start (108.25s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-002994 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-002994 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m48.254053605s)
--- PASS: TestNetworkPlugins/group/bridge/Start (108.25s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-002994 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-002994 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pv8cf" [686575bd-3883-4533-aa31-b2f4e417ef31] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-pv8cf" [686575bd-3883-4533-aa31-b2f4e417ef31] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.005899671s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.37s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-002994 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-002994 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-002994 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-002994 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-002994 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-002994 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestStartStop/group/old-k8s-version/serial/FirstStart (182.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-982564 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-982564 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m2.409829523s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (182.41s)

TestStartStop/group/no-preload/serial/FirstStart (141.87s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-328804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-328804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (2m21.868984865s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (141.87s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-66w4q" [f4592427-95ce-480e-bf7b-a4b62e7af848] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005220905s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-002994 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/flannel/NetCatPod (9.36s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-002994 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6vgsj" [f90b0cae-11fe-4ddd-a4e8-52526b9cfb67] Pending
helpers_test.go:344: "netcat-6bc787d567-6vgsj" [f90b0cae-11fe-4ddd-a4e8-52526b9cfb67] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-6vgsj" [f90b0cae-11fe-4ddd-a4e8-52526b9cfb67] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004175712s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.36s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-002994 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-002994 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-002994 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestStartStop/group/embed-certs/serial/FirstStart (77.06s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-508309 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-508309 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (1m17.064358894s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.06s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-002994 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-002994 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7kcph" [4a83668c-816a-4bdf-af54-5d6351ae611a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7kcph" [4a83668c-816a-4bdf-af54-5d6351ae611a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005753278s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-002994 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-002994 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-002994 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (104.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-546334 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-546334 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (1m44.069532348s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (104.07s)

TestStartStop/group/embed-certs/serial/DeployApp (12.93s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-508309 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [25451ee6-8482-4162-a7df-a4801178fd09] Pending
helpers_test.go:344: "busybox" [25451ee6-8482-4162-a7df-a4801178fd09] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [25451ee6-8482-4162-a7df-a4801178fd09] Running
E0503 22:47:54.355729   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
E0503 22:47:54.361050   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
E0503 22:47:54.371331   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
E0503 22:47:54.391624   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
E0503 22:47:54.431929   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
E0503 22:47:54.512296   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
E0503 22:47:54.672719   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
E0503 22:47:54.993214   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
E0503 22:47:55.634047   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.005637729s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-508309 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.93s)

TestStartStop/group/no-preload/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-328804 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [da135d6e-2d7d-4e77-b3e7-9e2e65a3b772] Pending
helpers_test.go:344: "busybox" [da135d6e-2d7d-4e77-b3e7-9e2e65a3b772] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [da135d6e-2d7d-4e77-b3e7-9e2e65a3b772] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.006452163s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-328804 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-328804 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-328804 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.151463556s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-328804 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-508309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0503 22:47:56.914376   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-508309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.124373947s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-508309 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/no-preload/serial/Stop (92.57s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-328804 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-328804 --alsologtostderr -v=3: (1m32.568899112s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.57s)

TestStartStop/group/embed-certs/serial/Stop (92.51s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-508309 --alsologtostderr -v=3
E0503 22:47:59.475054   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
E0503 22:48:04.595651   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
E0503 22:48:14.835831   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
E0503 22:48:21.184750   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
E0503 22:48:21.189998   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
E0503 22:48:21.200230   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
E0503 22:48:21.220458   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
E0503 22:48:21.260799   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
E0503 22:48:21.341182   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
E0503 22:48:21.501625   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
E0503 22:48:21.821888   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
E0503 22:48:22.462031   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
E0503 22:48:23.742635   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-508309 --alsologtostderr -v=3: (1m32.511244264s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.51s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-982564 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f285b706-4670-414c-83bc-b79922f0a5fe] Pending
helpers_test.go:344: "busybox" [f285b706-4670-414c-83bc-b79922f0a5fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0503 22:48:26.303188   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f285b706-4670-414c-83bc-b79922f0a5fe] Running
E0503 22:48:31.423983   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
E0503 22:48:35.316967   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.004515911s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-982564 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.47s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-982564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-982564 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/old-k8s-version/serial/Stop (92.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-982564 --alsologtostderr -v=3
E0503 22:48:41.664843   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-982564 --alsologtostderr -v=3: (1m32.460084252s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.46s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-546334 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [89d229ff-13b6-47af-97d0-4d0fe35d8820] Pending
helpers_test.go:344: "busybox" [89d229ff-13b6-47af-97d0-4d0fe35d8820] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [89d229ff-13b6-47af-97d0-4d0fe35d8820] Running
E0503 22:49:02.145986   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004250007s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-546334 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-546334 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-546334 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-546334 --alsologtostderr -v=3
E0503 22:49:16.277587   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
E0503 22:49:18.859590   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:49:18.864885   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:49:18.875196   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:49:18.895444   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:49:18.936516   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:49:19.016845   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:49:19.176965   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:49:19.497328   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:49:20.137847   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:49:21.418813   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:49:23.979204   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:49:29.100320   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-546334 --alsologtostderr -v=3: (1m31.90501507s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.91s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328804 -n no-preload-328804
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328804 -n no-preload-328804: exit status 7 (76.668301ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-328804 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-508309 -n embed-certs-508309
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-508309 -n embed-certs-508309: exit status 7 (76.240248ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-508309 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (321.74s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-328804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-328804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (5m21.418488043s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-328804 -n no-preload-328804
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (321.74s)

TestStartStop/group/embed-certs/serial/SecondStart (343.38s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-508309 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
E0503 22:49:39.340615   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:49:43.106718   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
E0503 22:49:45.606389   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
E0503 22:49:53.437841   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:49:53.443422   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:49:53.453953   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:49:53.474380   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:49:53.515147   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:49:53.595444   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:49:53.756185   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:49:54.076898   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:49:54.717035   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:49:55.997566   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:49:57.548753   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
E0503 22:49:57.554055   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
E0503 22:49:57.564344   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
E0503 22:49:57.584641   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
E0503 22:49:57.624958   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
E0503 22:49:57.705269   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
E0503 22:49:57.865720   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
E0503 22:49:58.186368   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
E0503 22:49:58.558312   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:49:58.827031   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
E0503 22:49:59.821314   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:50:00.107718   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
E0503 22:50:02.668508   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
E0503 22:50:03.679114   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:50:07.788707   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-508309 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (5m43.063775147s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-508309 -n embed-certs-508309
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (343.38s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-982564 -n old-k8s-version-982564
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-982564 -n old-k8s-version-982564: exit status 7 (77.360069ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-982564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (195.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-982564 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0503 22:50:13.919751   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:50:18.029843   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
E0503 22:50:34.400528   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-982564 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m14.912571135s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-982564 -n old-k8s-version-982564
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (195.20s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-546334 -n default-k8s-diff-port-546334
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-546334 -n default-k8s-diff-port-546334: exit status 7 (103.133727ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-546334 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (317.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-546334 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
E0503 22:50:38.198779   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
E0503 22:50:38.521586   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
E0503 22:50:40.781624   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:50:52.440606   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
E0503 22:50:52.445902   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
E0503 22:50:52.456167   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
E0503 22:50:52.476431   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
E0503 22:50:52.516723   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
E0503 22:50:52.597068   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
E0503 22:50:52.757469   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
E0503 22:50:53.077786   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
E0503 22:50:53.718018   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
E0503 22:50:54.998663   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
E0503 22:50:57.559770   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
E0503 22:51:02.680114   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
E0503 22:51:05.027559   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
E0503 22:51:12.921172   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
E0503 22:51:15.360846   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:51:19.482619   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
E0503 22:51:33.402080   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
E0503 22:51:41.562952   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
E0503 22:51:41.568251   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
E0503 22:51:41.578559   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
E0503 22:51:41.598894   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
E0503 22:51:41.639220   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
E0503 22:51:41.719535   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
E0503 22:51:41.879775   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
E0503 22:51:42.200395   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
E0503 22:51:42.841393   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
E0503 22:51:44.122521   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
E0503 22:51:46.682721   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
E0503 22:51:51.803455   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
E0503 22:52:02.044486   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
E0503 22:52:02.702424   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:52:05.370602   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
E0503 22:52:14.362411   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
E0503 22:52:22.525531   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
E0503 22:52:37.281967   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:52:41.403166   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
E0503 22:52:54.354830   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
E0503 22:53:03.486556   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
E0503 22:53:21.184828   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
E0503 22:53:22.039084   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/auto-002994/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-546334 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (5m17.209548674s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-546334 -n default-k8s-diff-port-546334
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (317.52s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xfn7r" [b3e9b9bc-82ee-43d7-beeb-a490ffa4a7a7] Running
E0503 22:53:28.419568   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/functional-515062/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004547348s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xfn7r" [b3e9b9bc-82ee-43d7-beeb-a490ffa4a7a7] Running
E0503 22:53:36.282682   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/flannel-002994/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005433822s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-982564 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-982564 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (2.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-982564 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-982564 -n old-k8s-version-982564
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-982564 -n old-k8s-version-982564: exit status 2 (265.149349ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-982564 -n old-k8s-version-982564
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-982564 -n old-k8s-version-982564: exit status 2 (263.440493ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-982564 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-982564 -n old-k8s-version-982564
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-982564 -n old-k8s-version-982564
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.81s)

TestStartStop/group/newest-cni/serial/FirstStart (61.29s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-440522 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
E0503 22:53:48.867747   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/kindnet-002994/client.crt: no such file or directory
E0503 22:54:18.859477   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
E0503 22:54:25.406903   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/bridge-002994/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-440522 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (1m1.290839161s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.29s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-440522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-440522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.201284906s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/newest-cni/serial/Stop (2.56s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-440522 --alsologtostderr -v=3
E0503 22:54:45.606742   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/addons-146858/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-440522 --alsologtostderr -v=3: (2.562582743s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.56s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-440522 -n newest-cni-440522
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-440522 -n newest-cni-440522: exit status 7 (79.049872ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-440522 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (36.91s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-440522 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
E0503 22:54:46.542930   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/calico-002994/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-440522 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (36.597936261s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-440522 -n newest-cni-440522
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.91s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-6ckjf" [21510fd9-e0a4-4bb2-a480-e5ce1ede0c7f] Running
E0503 22:54:53.438079   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
E0503 22:54:57.548825   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005935984s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-6ckjf" [21510fd9-e0a4-4bb2-a480-e5ce1ede0c7f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005448921s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-328804 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-328804 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (2.95s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-328804 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-328804 -n no-preload-328804
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-328804 -n no-preload-328804: exit status 2 (264.88959ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-328804 -n no-preload-328804
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-328804 -n no-preload-328804: exit status 2 (273.207169ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-328804 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-328804 -n no-preload-328804
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-328804 -n no-preload-328804
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.95s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (18.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7spjj" [ede4412d-b946-4836-be49-8a3be65289b7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0503 22:55:21.122193   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/custom-flannel-002994/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7spjj" [ede4412d-b946-4836-be49-8a3be65289b7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.003928647s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (18.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-440522 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (3.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-440522 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-440522 --alsologtostderr -v=1: (1.009697761s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-440522 -n newest-cni-440522
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-440522 -n newest-cni-440522: exit status 2 (294.655887ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-440522 -n newest-cni-440522
E0503 22:55:25.244057   13378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/enable-default-cni-002994/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-440522 -n newest-cni-440522: exit status 2 (291.572742ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-440522 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-440522 -n newest-cni-440522
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-440522 -n newest-cni-440522
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.15s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7spjj" [ede4412d-b946-4836-be49-8a3be65289b7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006693813s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-508309 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-508309 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.93s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-508309 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-508309 -n embed-certs-508309
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-508309 -n embed-certs-508309: exit status 2 (256.763582ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-508309 -n embed-certs-508309
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-508309 -n embed-certs-508309: exit status 2 (251.353531ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-508309 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-508309 -n embed-certs-508309
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-508309 -n embed-certs-508309
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.93s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-zfhql" [d02be528-7ff3-4b36-9d4d-b1b858e6bd4f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-zfhql" [d02be528-7ff3-4b36-9d4d-b1b858e6bd4f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005002877s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-zfhql" [d02be528-7ff3-4b36-9d4d-b1b858e6bd4f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004528569s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-546334 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-546334 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-546334 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-546334 -n default-k8s-diff-port-546334
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-546334 -n default-k8s-diff-port-546334: exit status 2 (248.828481ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-546334 -n default-k8s-diff-port-546334
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-546334 -n default-k8s-diff-port-546334: exit status 2 (248.415062ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-546334 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-546334 -n default-k8s-diff-port-546334
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-546334 -n default-k8s-diff-port-546334
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

Test skip (36/325)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.0/cached-images 0
15 TestDownloadOnly/v1.30.0/binaries 0
16 TestDownloadOnly/v1.30.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
118 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
254 TestNetworkPlugins/group/kubenet 3
262 TestNetworkPlugins/group/cilium 4.53
277 TestStartStop/group/disable-driver-mounts 0.15

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-002994 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-002994

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-002994

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-002994

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-002994

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-002994

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-002994

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-002994

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-002994

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-002994

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-002994

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: /etc/hosts:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: /etc/resolv.conf:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-002994

>>> host: crictl pods:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: crictl containers:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> k8s: describe netcat deployment:
error: context "kubenet-002994" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-002994" does not exist

>>> k8s: netcat logs:
error: context "kubenet-002994" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-002994" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-002994" does not exist

>>> k8s: coredns logs:
error: context "kubenet-002994" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-002994" does not exist

>>> k8s: api server logs:
error: context "kubenet-002994" does not exist

>>> host: /etc/cni:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: ip a s:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: ip r s:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: iptables-save:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: iptables table nat:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-002994" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-002994" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-002994" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: kubelet daemon config:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> k8s: kubelet logs:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18793-6010/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 03 May 2024 22:35:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.39.57:8443
  name: cert-expiration-996621
contexts:
- context:
    cluster: cert-expiration-996621
    extensions:
    - extension:
        last-update: Fri, 03 May 2024 22:35:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: cert-expiration-996621
  name: cert-expiration-996621
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-996621
  user:
    client-certificate: /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/cert-expiration-996621/client.crt
    client-key: /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/cert-expiration-996621/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-002994

>>> host: docker daemon status:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: docker daemon config:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: docker system info:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: cri-docker daemon status:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: cri-docker daemon config:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: cri-dockerd version:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: containerd daemon status:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: containerd daemon config:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: containerd config dump:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: crio daemon status:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: crio daemon config:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: /etc/crio:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

>>> host: crio config:
* Profile "kubenet-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-002994"

----------------------- debugLogs end: kubenet-002994 [took: 2.860752256s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-002994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-002994
--- SKIP: TestNetworkPlugins/group/kubenet (3.00s)

TestNetworkPlugins/group/cilium (4.53s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-002994 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-002994

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-002994

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-002994

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-002994

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-002994

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-002994

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-002994

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-002994

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-002994

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-002994

>>> host: /etc/nsswitch.conf:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: /etc/hosts:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: /etc/resolv.conf:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-002994

>>> host: crictl pods:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: crictl containers:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> k8s: describe netcat deployment:
error: context "cilium-002994" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-002994" does not exist

>>> k8s: netcat logs:
error: context "cilium-002994" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-002994" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-002994" does not exist

>>> k8s: coredns logs:
error: context "cilium-002994" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-002994" does not exist

>>> k8s: api server logs:
error: context "cilium-002994" does not exist

>>> host: /etc/cni:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: ip a s:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: ip r s:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: iptables-save:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: iptables table nat:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-002994

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-002994

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-002994" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-002994" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-002994

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-002994

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-002994" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-002994" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-002994" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-002994" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-002994" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: kubelet daemon config:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> k8s: kubelet logs:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18793-6010/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 03 May 2024 22:35:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.39.57:8443
  name: cert-expiration-996621
contexts:
- context:
    cluster: cert-expiration-996621
    extensions:
    - extension:
        last-update: Fri, 03 May 2024 22:35:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: cert-expiration-996621
  name: cert-expiration-996621
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-996621
  user:
    client-certificate: /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/cert-expiration-996621/client.crt
    client-key: /home/jenkins/minikube-integration/18793-6010/.minikube/profiles/cert-expiration-996621/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-002994

>>> host: docker daemon status:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: docker daemon config:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: docker system info:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: cri-docker daemon status:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: cri-docker daemon config:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: cri-dockerd version:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: containerd daemon status:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: containerd daemon config:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: containerd config dump:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: crio daemon status:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: crio daemon config:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: /etc/crio:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

>>> host: crio config:
* Profile "cilium-002994" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-002994"

----------------------- debugLogs end: cilium-002994 [took: 4.348032432s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-002994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-002994
--- SKIP: TestNetworkPlugins/group/cilium (4.53s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-729145" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-729145
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)