Test Report: KVM_Linux_containerd 18773

Commit: 30a9d8153d68792af1ccb4545db3a1a834f0d1ba:2024-04-29:34253

Test fail (1/325)

Order  Failed test                     Duration (s)
33     TestAddons/parallel/HelmTiller  18.73
TestAddons/parallel/HelmTiller (18.73s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 2.587357ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-xzxtg" [5327d74d-9125-4e88-afe1-b0720c1dcce0] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.006531902s
addons_test.go:473: (dbg) Run:  kubectl --context addons-399337 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-399337 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (10.32674492s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-399337 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-399337 addons disable helm-tiller --alsologtostderr -v=1: exit status 11 (386.209934ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0429 11:55:40.518118  861457 out.go:291] Setting OutFile to fd 1 ...
	I0429 11:55:40.518247  861457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:55:40.518272  861457 out.go:304] Setting ErrFile to fd 2...
	I0429 11:55:40.518277  861457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:55:40.518474  861457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
	I0429 11:55:40.518770  861457 mustload.go:65] Loading cluster: addons-399337
	I0429 11:55:40.519127  861457 config.go:182] Loaded profile config "addons-399337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 11:55:40.519166  861457 addons.go:597] checking whether the cluster is paused
	I0429 11:55:40.519268  861457 config.go:182] Loaded profile config "addons-399337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 11:55:40.519282  861457 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:55:40.519651  861457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:55:40.519694  861457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:55:40.534751  861457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39587
	I0429 11:55:40.535337  861457 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:55:40.535961  861457 main.go:141] libmachine: Using API Version  1
	I0429 11:55:40.535989  861457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:55:40.536366  861457 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:55:40.536596  861457 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:55:40.538624  861457 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:55:40.538884  861457 ssh_runner.go:195] Run: systemctl --version
	I0429 11:55:40.538912  861457 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:55:40.541516  861457 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:55:40.541952  861457 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:55:40.541985  861457 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:55:40.542158  861457 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:55:40.542334  861457 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:55:40.542505  861457 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:55:40.542628  861457 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:55:40.630609  861457 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0429 11:55:40.630723  861457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 11:55:40.688061  861457 cri.go:89] found id: "0aee649614977c51b83091235ff0d593fd444e6630a793291de312c4817dfdb2"
	I0429 11:55:40.688088  861457 cri.go:89] found id: "261306b51937cfaf1cd8cb45a719e1d872b4d844aeae4ab31e0faaba03e53223"
	I0429 11:55:40.688092  861457 cri.go:89] found id: "70d58dc45162e6cbdfdc162cf2d24b7f09b0ca6f7349fad35b641d2df0057a5f"
	I0429 11:55:40.688095  861457 cri.go:89] found id: "79aeff8b8ac1ffcfaedd378dbd357cb41063755c0aeade3235c2cc201b822354"
	I0429 11:55:40.688098  861457 cri.go:89] found id: "dfb8e082efea98474bb2f2b7897e8aa9d98c86e438f54ba35f556654bd513f11"
	I0429 11:55:40.688105  861457 cri.go:89] found id: "55b01d3b035f6418083c93207271709ffd786e87f6cb87132fd5a2b49a7577c3"
	I0429 11:55:40.688113  861457 cri.go:89] found id: "399d1db4439fae9d1980633c04e02e56d2d0cfe66b2087224e0f7843168ffd93"
	I0429 11:55:40.688121  861457 cri.go:89] found id: "0544e2787166e5b8e5bad681b9a95cbd49f2359b5eb470600c44c860358188e7"
	I0429 11:55:40.688125  861457 cri.go:89] found id: "7c8df156bca67507ac8110620f424498bfb65940f719249c8812192f98d26313"
	I0429 11:55:40.688132  861457 cri.go:89] found id: "278f7864cc338b264008b4ca61795cbbafcb6ce23189010edb132002fbc7de89"
	I0429 11:55:40.688137  861457 cri.go:89] found id: "6a63641f0fa77b19a0b79bd89c120e7a29af9f73e9f2455a71b345b9bd9cbe54"
	I0429 11:55:40.688141  861457 cri.go:89] found id: "5c93bcf869433492d9e3a4c11648684f8185cca68dc0ac3c56465b219ac3d5f0"
	I0429 11:55:40.688145  861457 cri.go:89] found id: "1e8ab725b0e03a9b8e3cc66aeb2a8aa276a2c1b27ae58a2c065909beeab736ec"
	I0429 11:55:40.688149  861457 cri.go:89] found id: "10965d8d98b5a0d3fa6870e17438c9e9609a80aa3516a69de53a8474c990d765"
	I0429 11:55:40.688161  861457 cri.go:89] found id: "24100a2c2b625b568bdfc4c93d95c4e5e6daf2dd0ad752fd34fc987fe64cf485"
	I0429 11:55:40.688164  861457 cri.go:89] found id: "533d0f01aa13dbde82e977952f8673fdd0022f882042f09ffe0d32dc3d98cdd4"
	I0429 11:55:40.688166  861457 cri.go:89] found id: "19bb2e708280c24f15f94ee7ba71fa0c5db7d6877a374ca9af8af1fb5fcc3fec"
	I0429 11:55:40.688170  861457 cri.go:89] found id: "b0a4cb7d4581ed2f910b275b6aa366ccb1475a5abe93015fed012fa8f868e276"
	I0429 11:55:40.688172  861457 cri.go:89] found id: "a30da3d9ae2cc2aaa6041219d17672d1b28325c04d1dbd194edd3aa6e655356e"
	I0429 11:55:40.688175  861457 cri.go:89] found id: "74172e40027ff916e75d38241306657c80f53497039129a00474c4faaa8ef589"
	I0429 11:55:40.688178  861457 cri.go:89] found id: "fa926b2753efdb4e92ce428d676dabe157bc211889e7a46bbf11094969e2bc68"
	I0429 11:55:40.688180  861457 cri.go:89] found id: "fc24128a4260e66ad1e6b5c33e8992dc50ea7990572fdc10cf05d7eafcaae5c1"
	I0429 11:55:40.688183  861457 cri.go:89] found id: "4d3f9d1e80187faf4a968059a2b455605daf8e771abc86bff00b40d042367661"
	I0429 11:55:40.688185  861457 cri.go:89] found id: ""
	I0429 11:55:40.688243  861457 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0429 11:55:40.821617  861457 main.go:141] libmachine: Making call to close driver server
	I0429 11:55:40.821656  861457 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:55:40.821984  861457 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:55:40.822005  861457 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:55:40.824162  861457 out.go:177] 
	W0429 11:55:40.825532  861457 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-29T11:55:40Z" level=error msg="stat /run/containerd/runc/k8s.io/9959552d53865499547bd8826ca5d2d41f4502a02a0c9095e9645bfbaeff3bec: no such file or directory"
	
	W0429 11:55:40.825551  861457 out.go:239] * 
	W0429 11:55:40.829184  861457 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_6f112806b36003b4c7cc9d1475fa654343463182_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 11:55:40.830567  861457 out.go:177] 

** /stderr **
addons_test.go:492: failed disabling helm-tiller addon. arg "out/minikube-linux-amd64 -p addons-399337 addons disable helm-tiller --alsologtostderr -v=1": exit status 11
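The stderr capture above shows the shape of the failure: before disabling an addon, minikube lists kube-system container IDs via crictl, then runs `sudo runc --root /run/containerd/runc/k8s.io list -f json` and filters the result for paused containers. Here the runc call itself exited non-zero because one container's state directory vanished between the two steps. The filtering stage can be sketched as follows (a minimal sketch, not minikube's actual code; `paused_ids` is a hypothetical helper, and the `id`/`status` fields follow runc's state JSON):

```python
import json

def paused_ids(runc_list_json: str) -> list[str]:
    """Filter `runc list -f json` output down to paused container IDs.

    runc emits a JSON array of container state objects; each entry
    carries an "id" and a "status" field. An empty output means no
    containers, so treat it as an empty list.
    """
    if not runc_list_json.strip():
        return []
    return [c["id"] for c in json.loads(runc_list_json)
            if c.get("status") == "paused"]

# Example with fabricated runc output:
sample = '[{"id": "abc", "status": "running"}, {"id": "def", "status": "paused"}]'
print(paused_ids(sample))  # ['def']
```

The race in the report happens one step earlier: if a container exits after the crictl enumeration but before `runc list` reads its state directory, runc fails with the "no such file or directory" stat error seen above, and the whole paused check is reported as exit status 11.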
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-399337 -n addons-399337
helpers_test.go:244: <<< TestAddons/parallel/HelmTiller FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/HelmTiller]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-399337 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-399337 logs -n 25: (2.148527506s)
helpers_test.go:252: TestAddons/parallel/HelmTiller logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-158460 | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC |                     |
	|         | -p download-only-158460              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC | 29 Apr 24 11:52 UTC |
	| delete  | -p download-only-158460              | download-only-158460 | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC | 29 Apr 24 11:52 UTC |
	| start   | -o=json --download-only              | download-only-509997 | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC |                     |
	|         | -p download-only-509997              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC | 29 Apr 24 11:52 UTC |
	| delete  | -p download-only-509997              | download-only-509997 | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC | 29 Apr 24 11:52 UTC |
	| delete  | -p download-only-158460              | download-only-158460 | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC | 29 Apr 24 11:52 UTC |
	| delete  | -p download-only-509997              | download-only-509997 | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC | 29 Apr 24 11:52 UTC |
	| start   | --download-only -p                   | binary-mirror-540254 | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC |                     |
	|         | binary-mirror-540254                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38627               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-540254              | binary-mirror-540254 | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC | 29 Apr 24 11:52 UTC |
	| addons  | disable dashboard -p                 | addons-399337        | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC |                     |
	|         | addons-399337                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-399337        | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC |                     |
	|         | addons-399337                        |                      |         |         |                     |                     |
	| start   | -p addons-399337 --wait=true         | addons-399337        | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC | 29 Apr 24 11:55 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2          |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-399337        | jenkins | v1.33.0 | 29 Apr 24 11:55 UTC | 29 Apr 24 11:55 UTC |
	|         | addons-399337                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-399337        | jenkins | v1.33.0 | 29 Apr 24 11:55 UTC | 29 Apr 24 11:55 UTC |
	|         | -p addons-399337                     |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-399337        | jenkins | v1.33.0 | 29 Apr 24 11:55 UTC | 29 Apr 24 11:55 UTC |
	|         | -p addons-399337                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-399337 addons disable         | addons-399337        | jenkins | v1.33.0 | 29 Apr 24 11:55 UTC |                     |
	|         | helm-tiller --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 11:52:58
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 11:52:58.599052  860437 out.go:291] Setting OutFile to fd 1 ...
	I0429 11:52:58.599321  860437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:52:58.599331  860437 out.go:304] Setting ErrFile to fd 2...
	I0429 11:52:58.599335  860437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:52:58.599527  860437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
	I0429 11:52:58.600181  860437 out.go:298] Setting JSON to false
	I0429 11:52:58.601137  860437 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5727,"bootTime":1714385852,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 11:52:58.601206  860437 start.go:139] virtualization: kvm guest
	I0429 11:52:58.603334  860437 out.go:177] * [addons-399337] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 11:52:58.604762  860437 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 11:52:58.604772  860437 notify.go:220] Checking for updates...
	I0429 11:52:58.607496  860437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:52:58.609149  860437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-852552/kubeconfig
	I0429 11:52:58.610417  860437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-852552/.minikube
	I0429 11:52:58.611685  860437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 11:52:58.612847  860437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 11:52:58.614231  860437 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:52:58.647052  860437 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 11:52:58.648398  860437 start.go:297] selected driver: kvm2
	I0429 11:52:58.648418  860437 start.go:901] validating driver "kvm2" against <nil>
	I0429 11:52:58.648433  860437 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 11:52:58.649390  860437 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:52:58.649488  860437 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18773-852552/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 11:52:58.664795  860437 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 11:52:58.664880  860437 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 11:52:58.665105  860437 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 11:52:58.665164  860437 cni.go:84] Creating CNI manager for ""
	I0429 11:52:58.665177  860437 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0429 11:52:58.665185  860437 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 11:52:58.665249  860437 start.go:340] cluster config:
	{Name:addons-399337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-399337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:52:58.665374  860437 iso.go:125] acquiring lock: {Name:mk8b8ddae761cd3484839905e26ad9b8e12585e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:52:58.667254  860437 out.go:177] * Starting "addons-399337" primary control-plane node in "addons-399337" cluster
	I0429 11:52:58.668533  860437 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0429 11:52:58.668580  860437 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18773-852552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4
	I0429 11:52:58.668589  860437 cache.go:56] Caching tarball of preloaded images
	I0429 11:52:58.668716  860437 preload.go:173] Found /home/jenkins/minikube-integration/18773-852552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 11:52:58.668730  860437 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on containerd
	I0429 11:52:58.669114  860437 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/config.json ...
	I0429 11:52:58.669148  860437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/config.json: {Name:mkef248842283f243f00cd57751efa7b8f8ae6e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:52:58.669342  860437 start.go:360] acquireMachinesLock for addons-399337: {Name:mk82f307343a5b6f09c1925b170bfd071eaae56a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 11:52:58.669402  860437 start.go:364] duration metric: took 43.033µs to acquireMachinesLock for "addons-399337"
	I0429 11:52:58.669427  860437 start.go:93] Provisioning new machine with config: &{Name:addons-399337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-399337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0429 11:52:58.669500  860437 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 11:52:58.671301  860437 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0429 11:52:58.671594  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:52:58.671639  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:52:58.686470  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0429 11:52:58.686902  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:52:58.687527  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:52:58.687548  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:52:58.687855  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:52:58.688028  860437 main.go:141] libmachine: (addons-399337) Calling .GetMachineName
	I0429 11:52:58.688222  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:52:58.688374  860437 start.go:159] libmachine.API.Create for "addons-399337" (driver="kvm2")
	I0429 11:52:58.688405  860437 client.go:168] LocalClient.Create starting
	I0429 11:52:58.688442  860437 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18773-852552/.minikube/certs/ca.pem
	I0429 11:52:58.994964  860437 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18773-852552/.minikube/certs/cert.pem
	I0429 11:52:59.264072  860437 main.go:141] libmachine: Running pre-create checks...
	I0429 11:52:59.264111  860437 main.go:141] libmachine: (addons-399337) Calling .PreCreateCheck
	I0429 11:52:59.264731  860437 main.go:141] libmachine: (addons-399337) Calling .GetConfigRaw
	I0429 11:52:59.265307  860437 main.go:141] libmachine: Creating machine...
	I0429 11:52:59.265325  860437 main.go:141] libmachine: (addons-399337) Calling .Create
	I0429 11:52:59.265467  860437 main.go:141] libmachine: (addons-399337) Creating KVM machine...
	I0429 11:52:59.266736  860437 main.go:141] libmachine: (addons-399337) DBG | found existing default KVM network
	I0429 11:52:59.267475  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:52:59.267318  860459 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0429 11:52:59.267505  860437 main.go:141] libmachine: (addons-399337) DBG | created network xml: 
	I0429 11:52:59.267523  860437 main.go:141] libmachine: (addons-399337) DBG | <network>
	I0429 11:52:59.267541  860437 main.go:141] libmachine: (addons-399337) DBG |   <name>mk-addons-399337</name>
	I0429 11:52:59.267549  860437 main.go:141] libmachine: (addons-399337) DBG |   <dns enable='no'/>
	I0429 11:52:59.267553  860437 main.go:141] libmachine: (addons-399337) DBG |   
	I0429 11:52:59.267561  860437 main.go:141] libmachine: (addons-399337) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0429 11:52:59.267566  860437 main.go:141] libmachine: (addons-399337) DBG |     <dhcp>
	I0429 11:52:59.267573  860437 main.go:141] libmachine: (addons-399337) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0429 11:52:59.267581  860437 main.go:141] libmachine: (addons-399337) DBG |     </dhcp>
	I0429 11:52:59.267588  860437 main.go:141] libmachine: (addons-399337) DBG |   </ip>
	I0429 11:52:59.267595  860437 main.go:141] libmachine: (addons-399337) DBG |   
	I0429 11:52:59.267602  860437 main.go:141] libmachine: (addons-399337) DBG | </network>
	I0429 11:52:59.267613  860437 main.go:141] libmachine: (addons-399337) DBG | 
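The `created network xml` lines above show the driver templating a private libvirt network definition before creating it. As a minimal sketch of that step (the type, function, and template names here are illustrative, not minikube's actual source), the XML can be rendered with `text/template` and sanity-checked with `encoding/xml` before it is handed to libvirt:

```go
package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
	"text/template"
)

// Template mirroring the <network> definition in the log above.
// All identifiers in this sketch are hypothetical.
const networkTmpl = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>`

type netParams struct {
	Name, Gateway, Netmask, ClientMin, ClientMax string
}

// renderNetworkXML fills in the template and verifies the result is
// well-formed XML before returning it.
func renderNetworkXML(p netParams) (string, error) {
	t, err := template.New("net").Parse(networkTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, p); err != nil {
		return "", err
	}
	var parsed struct {
		Name string `xml:"name"`
	}
	if err := xml.Unmarshal(buf.Bytes(), &parsed); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderNetworkXML(netParams{
		Name: "mk-addons-399337", Gateway: "192.168.39.1",
		Netmask: "255.255.255.0", ClientMin: "192.168.39.2", ClientMax: "192.168.39.253",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

The parameters match the `using free private subnet 192.168.39.0/24` line logged just before the XML.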
	I0429 11:52:59.273229  860437 main.go:141] libmachine: (addons-399337) DBG | trying to create private KVM network mk-addons-399337 192.168.39.0/24...
	I0429 11:52:59.340118  860437 main.go:141] libmachine: (addons-399337) DBG | private KVM network mk-addons-399337 192.168.39.0/24 created
	I0429 11:52:59.340211  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:52:59.340068  860459 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18773-852552/.minikube
	I0429 11:52:59.340246  860437 main.go:141] libmachine: (addons-399337) Setting up store path in /home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337 ...
	I0429 11:52:59.340308  860437 main.go:141] libmachine: (addons-399337) Building disk image from file:///home/jenkins/minikube-integration/18773-852552/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 11:52:59.340342  860437 main.go:141] libmachine: (addons-399337) Downloading /home/jenkins/minikube-integration/18773-852552/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18773-852552/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 11:52:59.595241  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:52:59.595024  860459 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa...
	I0429 11:52:59.781748  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:52:59.781546  860459 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/addons-399337.rawdisk...
	I0429 11:52:59.781789  860437 main.go:141] libmachine: (addons-399337) DBG | Writing magic tar header
	I0429 11:52:59.781805  860437 main.go:141] libmachine: (addons-399337) DBG | Writing SSH key tar header
	I0429 11:52:59.781819  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:52:59.781712  860459 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337 ...
	I0429 11:52:59.781841  860437 main.go:141] libmachine: (addons-399337) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337
	I0429 11:52:59.781857  860437 main.go:141] libmachine: (addons-399337) Setting executable bit set on /home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337 (perms=drwx------)
	I0429 11:52:59.781872  860437 main.go:141] libmachine: (addons-399337) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-852552/.minikube/machines
	I0429 11:52:59.781885  860437 main.go:141] libmachine: (addons-399337) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-852552/.minikube
	I0429 11:52:59.781891  860437 main.go:141] libmachine: (addons-399337) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18773-852552
	I0429 11:52:59.781900  860437 main.go:141] libmachine: (addons-399337) Setting executable bit set on /home/jenkins/minikube-integration/18773-852552/.minikube/machines (perms=drwxr-xr-x)
	I0429 11:52:59.781906  860437 main.go:141] libmachine: (addons-399337) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 11:52:59.781916  860437 main.go:141] libmachine: (addons-399337) Setting executable bit set on /home/jenkins/minikube-integration/18773-852552/.minikube (perms=drwxr-xr-x)
	I0429 11:52:59.781929  860437 main.go:141] libmachine: (addons-399337) Setting executable bit set on /home/jenkins/minikube-integration/18773-852552 (perms=drwxrwxr-x)
	I0429 11:52:59.781939  860437 main.go:141] libmachine: (addons-399337) DBG | Checking permissions on dir: /home/jenkins
	I0429 11:52:59.781948  860437 main.go:141] libmachine: (addons-399337) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 11:52:59.781961  860437 main.go:141] libmachine: (addons-399337) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 11:52:59.781966  860437 main.go:141] libmachine: (addons-399337) Creating domain...
	I0429 11:52:59.781988  860437 main.go:141] libmachine: (addons-399337) DBG | Checking permissions on dir: /home
	I0429 11:52:59.782004  860437 main.go:141] libmachine: (addons-399337) DBG | Skipping /home - not owner
	I0429 11:52:59.783163  860437 main.go:141] libmachine: (addons-399337) define libvirt domain using xml: 
	I0429 11:52:59.783180  860437 main.go:141] libmachine: (addons-399337) <domain type='kvm'>
	I0429 11:52:59.783186  860437 main.go:141] libmachine: (addons-399337)   <name>addons-399337</name>
	I0429 11:52:59.783191  860437 main.go:141] libmachine: (addons-399337)   <memory unit='MiB'>4000</memory>
	I0429 11:52:59.783196  860437 main.go:141] libmachine: (addons-399337)   <vcpu>2</vcpu>
	I0429 11:52:59.783200  860437 main.go:141] libmachine: (addons-399337)   <features>
	I0429 11:52:59.783205  860437 main.go:141] libmachine: (addons-399337)     <acpi/>
	I0429 11:52:59.783209  860437 main.go:141] libmachine: (addons-399337)     <apic/>
	I0429 11:52:59.783214  860437 main.go:141] libmachine: (addons-399337)     <pae/>
	I0429 11:52:59.783218  860437 main.go:141] libmachine: (addons-399337)     
	I0429 11:52:59.783228  860437 main.go:141] libmachine: (addons-399337)   </features>
	I0429 11:52:59.783232  860437 main.go:141] libmachine: (addons-399337)   <cpu mode='host-passthrough'>
	I0429 11:52:59.783237  860437 main.go:141] libmachine: (addons-399337)   
	I0429 11:52:59.783255  860437 main.go:141] libmachine: (addons-399337)   </cpu>
	I0429 11:52:59.783268  860437 main.go:141] libmachine: (addons-399337)   <os>
	I0429 11:52:59.783277  860437 main.go:141] libmachine: (addons-399337)     <type>hvm</type>
	I0429 11:52:59.783286  860437 main.go:141] libmachine: (addons-399337)     <boot dev='cdrom'/>
	I0429 11:52:59.783302  860437 main.go:141] libmachine: (addons-399337)     <boot dev='hd'/>
	I0429 11:52:59.783310  860437 main.go:141] libmachine: (addons-399337)     <bootmenu enable='no'/>
	I0429 11:52:59.783315  860437 main.go:141] libmachine: (addons-399337)   </os>
	I0429 11:52:59.783320  860437 main.go:141] libmachine: (addons-399337)   <devices>
	I0429 11:52:59.783332  860437 main.go:141] libmachine: (addons-399337)     <disk type='file' device='cdrom'>
	I0429 11:52:59.783347  860437 main.go:141] libmachine: (addons-399337)       <source file='/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/boot2docker.iso'/>
	I0429 11:52:59.783360  860437 main.go:141] libmachine: (addons-399337)       <target dev='hdc' bus='scsi'/>
	I0429 11:52:59.783369  860437 main.go:141] libmachine: (addons-399337)       <readonly/>
	I0429 11:52:59.783377  860437 main.go:141] libmachine: (addons-399337)     </disk>
	I0429 11:52:59.783383  860437 main.go:141] libmachine: (addons-399337)     <disk type='file' device='disk'>
	I0429 11:52:59.783391  860437 main.go:141] libmachine: (addons-399337)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 11:52:59.783399  860437 main.go:141] libmachine: (addons-399337)       <source file='/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/addons-399337.rawdisk'/>
	I0429 11:52:59.783406  860437 main.go:141] libmachine: (addons-399337)       <target dev='hda' bus='virtio'/>
	I0429 11:52:59.783414  860437 main.go:141] libmachine: (addons-399337)     </disk>
	I0429 11:52:59.783425  860437 main.go:141] libmachine: (addons-399337)     <interface type='network'>
	I0429 11:52:59.783444  860437 main.go:141] libmachine: (addons-399337)       <source network='mk-addons-399337'/>
	I0429 11:52:59.783457  860437 main.go:141] libmachine: (addons-399337)       <model type='virtio'/>
	I0429 11:52:59.783462  860437 main.go:141] libmachine: (addons-399337)     </interface>
	I0429 11:52:59.783467  860437 main.go:141] libmachine: (addons-399337)     <interface type='network'>
	I0429 11:52:59.783472  860437 main.go:141] libmachine: (addons-399337)       <source network='default'/>
	I0429 11:52:59.783480  860437 main.go:141] libmachine: (addons-399337)       <model type='virtio'/>
	I0429 11:52:59.783485  860437 main.go:141] libmachine: (addons-399337)     </interface>
	I0429 11:52:59.783492  860437 main.go:141] libmachine: (addons-399337)     <serial type='pty'>
	I0429 11:52:59.783498  860437 main.go:141] libmachine: (addons-399337)       <target port='0'/>
	I0429 11:52:59.783505  860437 main.go:141] libmachine: (addons-399337)     </serial>
	I0429 11:52:59.783510  860437 main.go:141] libmachine: (addons-399337)     <console type='pty'>
	I0429 11:52:59.783517  860437 main.go:141] libmachine: (addons-399337)       <target type='serial' port='0'/>
	I0429 11:52:59.783523  860437 main.go:141] libmachine: (addons-399337)     </console>
	I0429 11:52:59.783529  860437 main.go:141] libmachine: (addons-399337)     <rng model='virtio'>
	I0429 11:52:59.783540  860437 main.go:141] libmachine: (addons-399337)       <backend model='random'>/dev/random</backend>
	I0429 11:52:59.783548  860437 main.go:141] libmachine: (addons-399337)     </rng>
	I0429 11:52:59.783553  860437 main.go:141] libmachine: (addons-399337)     
	I0429 11:52:59.783563  860437 main.go:141] libmachine: (addons-399337)     
	I0429 11:52:59.783600  860437 main.go:141] libmachine: (addons-399337)   </devices>
	I0429 11:52:59.783624  860437 main.go:141] libmachine: (addons-399337) </domain>
	I0429 11:52:59.783634  860437 main.go:141] libmachine: (addons-399337) 
	I0429 11:52:59.789360  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:4f:17:5b in network default
	I0429 11:52:59.789941  860437 main.go:141] libmachine: (addons-399337) Ensuring networks are active...
	I0429 11:52:59.789963  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:52:59.790618  860437 main.go:141] libmachine: (addons-399337) Ensuring network default is active
	I0429 11:52:59.790947  860437 main.go:141] libmachine: (addons-399337) Ensuring network mk-addons-399337 is active
	I0429 11:52:59.792398  860437 main.go:141] libmachine: (addons-399337) Getting domain xml...
	I0429 11:52:59.793107  860437 main.go:141] libmachine: (addons-399337) Creating domain...
	I0429 11:53:01.166057  860437 main.go:141] libmachine: (addons-399337) Waiting to get IP...
	I0429 11:53:01.166884  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:01.167327  860437 main.go:141] libmachine: (addons-399337) DBG | unable to find current IP address of domain addons-399337 in network mk-addons-399337
	I0429 11:53:01.167382  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:53:01.167316  860459 retry.go:31] will retry after 293.938617ms: waiting for machine to come up
	I0429 11:53:01.463013  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:01.463500  860437 main.go:141] libmachine: (addons-399337) DBG | unable to find current IP address of domain addons-399337 in network mk-addons-399337
	I0429 11:53:01.463527  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:53:01.463444  860459 retry.go:31] will retry after 344.342569ms: waiting for machine to come up
	I0429 11:53:01.809007  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:01.809438  860437 main.go:141] libmachine: (addons-399337) DBG | unable to find current IP address of domain addons-399337 in network mk-addons-399337
	I0429 11:53:01.809467  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:53:01.809369  860459 retry.go:31] will retry after 337.136423ms: waiting for machine to come up
	I0429 11:53:02.147987  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:02.148347  860437 main.go:141] libmachine: (addons-399337) DBG | unable to find current IP address of domain addons-399337 in network mk-addons-399337
	I0429 11:53:02.148382  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:53:02.148294  860459 retry.go:31] will retry after 550.304404ms: waiting for machine to come up
	I0429 11:53:02.699881  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:02.700429  860437 main.go:141] libmachine: (addons-399337) DBG | unable to find current IP address of domain addons-399337 in network mk-addons-399337
	I0429 11:53:02.700460  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:53:02.700368  860459 retry.go:31] will retry after 745.286152ms: waiting for machine to come up
	I0429 11:53:03.446812  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:03.447260  860437 main.go:141] libmachine: (addons-399337) DBG | unable to find current IP address of domain addons-399337 in network mk-addons-399337
	I0429 11:53:03.447282  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:53:03.447233  860459 retry.go:31] will retry after 812.325845ms: waiting for machine to come up
	I0429 11:53:04.261003  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:04.261430  860437 main.go:141] libmachine: (addons-399337) DBG | unable to find current IP address of domain addons-399337 in network mk-addons-399337
	I0429 11:53:04.261455  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:53:04.261384  860459 retry.go:31] will retry after 883.903328ms: waiting for machine to come up
	I0429 11:53:05.146575  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:05.146990  860437 main.go:141] libmachine: (addons-399337) DBG | unable to find current IP address of domain addons-399337 in network mk-addons-399337
	I0429 11:53:05.147023  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:53:05.146952  860459 retry.go:31] will retry after 1.2896153s: waiting for machine to come up
	I0429 11:53:06.438439  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:06.438885  860437 main.go:141] libmachine: (addons-399337) DBG | unable to find current IP address of domain addons-399337 in network mk-addons-399337
	I0429 11:53:06.438910  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:53:06.438858  860459 retry.go:31] will retry after 1.831888385s: waiting for machine to come up
	I0429 11:53:08.271930  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:08.272368  860437 main.go:141] libmachine: (addons-399337) DBG | unable to find current IP address of domain addons-399337 in network mk-addons-399337
	I0429 11:53:08.272394  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:53:08.272318  860459 retry.go:31] will retry after 1.400510255s: waiting for machine to come up
	I0429 11:53:09.674818  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:09.675252  860437 main.go:141] libmachine: (addons-399337) DBG | unable to find current IP address of domain addons-399337 in network mk-addons-399337
	I0429 11:53:09.675287  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:53:09.675198  860459 retry.go:31] will retry after 2.141637758s: waiting for machine to come up
	I0429 11:53:11.819787  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:11.820271  860437 main.go:141] libmachine: (addons-399337) DBG | unable to find current IP address of domain addons-399337 in network mk-addons-399337
	I0429 11:53:11.820301  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:53:11.820208  860459 retry.go:31] will retry after 3.49143358s: waiting for machine to come up
	I0429 11:53:15.313725  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:15.314125  860437 main.go:141] libmachine: (addons-399337) DBG | unable to find current IP address of domain addons-399337 in network mk-addons-399337
	I0429 11:53:15.314152  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:53:15.314070  860459 retry.go:31] will retry after 3.198153818s: waiting for machine to come up
	I0429 11:53:18.516394  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:18.516819  860437 main.go:141] libmachine: (addons-399337) DBG | unable to find current IP address of domain addons-399337 in network mk-addons-399337
	I0429 11:53:18.516850  860437 main.go:141] libmachine: (addons-399337) DBG | I0429 11:53:18.516798  860459 retry.go:31] will retry after 5.584110166s: waiting for machine to come up
	I0429 11:53:24.102127  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.102635  860437 main.go:141] libmachine: (addons-399337) Found IP for machine: 192.168.39.246
	I0429 11:53:24.102658  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has current primary IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.102664  860437 main.go:141] libmachine: (addons-399337) Reserving static IP address...
	I0429 11:53:24.103034  860437 main.go:141] libmachine: (addons-399337) DBG | unable to find host DHCP lease matching {name: "addons-399337", mac: "52:54:00:eb:57:e6", ip: "192.168.39.246"} in network mk-addons-399337
	I0429 11:53:24.175898  860437 main.go:141] libmachine: (addons-399337) Reserved static IP address: 192.168.39.246
	I0429 11:53:24.175933  860437 main.go:141] libmachine: (addons-399337) Waiting for SSH to be available...
	I0429 11:53:24.175944  860437 main.go:141] libmachine: (addons-399337) DBG | Getting to WaitForSSH function...
	I0429 11:53:24.178257  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.178628  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:24.178660  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.178827  860437 main.go:141] libmachine: (addons-399337) DBG | Using SSH client type: external
	I0429 11:53:24.178876  860437 main.go:141] libmachine: (addons-399337) DBG | Using SSH private key: /home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa (-rw-------)
	I0429 11:53:24.178941  860437 main.go:141] libmachine: (addons-399337) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 11:53:24.178967  860437 main.go:141] libmachine: (addons-399337) DBG | About to run SSH command:
	I0429 11:53:24.178983  860437 main.go:141] libmachine: (addons-399337) DBG | exit 0
	I0429 11:53:24.302454  860437 main.go:141] libmachine: (addons-399337) DBG | SSH cmd err, output: <nil>: 
	I0429 11:53:24.302711  860437 main.go:141] libmachine: (addons-399337) KVM machine creation complete!
	I0429 11:53:24.303094  860437 main.go:141] libmachine: (addons-399337) Calling .GetConfigRaw
	I0429 11:53:24.303695  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:24.303971  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:24.304184  860437 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 11:53:24.304202  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:24.305699  860437 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 11:53:24.305718  860437 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 11:53:24.305725  860437 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 11:53:24.305732  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:24.307755  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.308079  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:24.308099  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.308270  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:24.308453  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:24.308613  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:24.308749  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:24.308939  860437 main.go:141] libmachine: Using SSH client type: native
	I0429 11:53:24.309194  860437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0429 11:53:24.309209  860437 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 11:53:24.409553  860437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:53:24.409580  860437 main.go:141] libmachine: Detecting the provisioner...
	I0429 11:53:24.409609  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:24.412678  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.413018  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:24.413044  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.413223  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:24.413453  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:24.413627  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:24.413789  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:24.413970  860437 main.go:141] libmachine: Using SSH client type: native
	I0429 11:53:24.414176  860437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0429 11:53:24.414188  860437 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 11:53:24.515046  860437 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 11:53:24.515151  860437 main.go:141] libmachine: found compatible host: buildroot
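The provisioner-detection step above runs `cat /etc/os-release` and matches on the distro ID (`buildroot` here). A sketch of extracting that ID from the command output (illustrative; the function name is hypothetical, not minikube's parser):

```go
package main

import (
	"fmt"
	"strings"
)

// osReleaseID returns the value of the ID= field from /etc/os-release
// style output, with surrounding quotes stripped, or "" if absent.
func osReleaseID(output string) string {
	for _, line := range strings.Split(output, "\n") {
		line = strings.TrimSpace(line)
		if v, ok := strings.CutPrefix(line, "ID="); ok {
			return strings.Trim(v, `"`)
		}
	}
	return ""
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println(osReleaseID(sample)) // buildroot
}
```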
	I0429 11:53:24.515167  860437 main.go:141] libmachine: Provisioning with buildroot...
	I0429 11:53:24.515174  860437 main.go:141] libmachine: (addons-399337) Calling .GetMachineName
	I0429 11:53:24.515441  860437 buildroot.go:166] provisioning hostname "addons-399337"
	I0429 11:53:24.515480  860437 main.go:141] libmachine: (addons-399337) Calling .GetMachineName
	I0429 11:53:24.515714  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:24.518569  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.519125  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:24.519157  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.519321  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:24.519538  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:24.519708  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:24.519845  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:24.519975  860437 main.go:141] libmachine: Using SSH client type: native
	I0429 11:53:24.520169  860437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0429 11:53:24.520183  860437 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-399337 && echo "addons-399337" | sudo tee /etc/hostname
	I0429 11:53:24.632775  860437 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-399337
	
	I0429 11:53:24.632814  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:24.635475  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.635813  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:24.635852  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.636021  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:24.636246  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:24.636429  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:24.636550  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:24.636686  860437 main.go:141] libmachine: Using SSH client type: native
	I0429 11:53:24.636889  860437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0429 11:53:24.636909  860437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-399337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-399337/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-399337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 11:53:24.743789  860437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
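The `/etc/hosts` script minikube ran over SSH just above is idempotent: it only rewrites or appends a `127.0.1.1` entry when no line already maps the hostname. A standalone sketch of the same logic, operating on a stand-in file (`/tmp/hosts.demo`) rather than the real `/etc/hosts`, might look like:

```shell
#!/bin/sh
# Sketch of minikube's idempotent 127.0.1.1 hostname mapping.
# HOSTS and NAME are stand-ins for /etc/hosts and the node name.
HOSTS=${HOSTS:-/tmp/hosts.demo}
NAME=${NAME:-addons-399337}

# Only touch the file if no line already ends with the hostname.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS" 2>/dev/null; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS" 2>/dev/null; then
        # Rewrite the existing 127.0.1.1 entry in place.
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # No entry yet: append one (creates the file on first run).
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
```

Running the script a second time is a no-op, which is why the log's second SSH command reports empty output on an already-configured guest.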
	I0429 11:53:24.743823  860437 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18773-852552/.minikube CaCertPath:/home/jenkins/minikube-integration/18773-852552/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18773-852552/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18773-852552/.minikube}
	I0429 11:53:24.743904  860437 buildroot.go:174] setting up certificates
	I0429 11:53:24.743926  860437 provision.go:84] configureAuth start
	I0429 11:53:24.743944  860437 main.go:141] libmachine: (addons-399337) Calling .GetMachineName
	I0429 11:53:24.744284  860437 main.go:141] libmachine: (addons-399337) Calling .GetIP
	I0429 11:53:24.746820  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.747261  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:24.747315  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.747391  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:24.749752  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.750149  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:24.750183  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.750356  860437 provision.go:143] copyHostCerts
	I0429 11:53:24.750430  860437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-852552/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18773-852552/.minikube/ca.pem (1078 bytes)
	I0429 11:53:24.750557  860437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-852552/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18773-852552/.minikube/cert.pem (1123 bytes)
	I0429 11:53:24.750630  860437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18773-852552/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18773-852552/.minikube/key.pem (1679 bytes)
	I0429 11:53:24.750695  860437 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18773-852552/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18773-852552/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18773-852552/.minikube/certs/ca-key.pem org=jenkins.addons-399337 san=[127.0.0.1 192.168.39.246 addons-399337 localhost minikube]
	I0429 11:53:24.973131  860437 provision.go:177] copyRemoteCerts
	I0429 11:53:24.973200  860437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 11:53:24.973227  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:24.976078  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.976399  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:24.976428  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:24.976707  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:24.976936  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:24.977102  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:24.977262  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:25.056668  860437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-852552/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 11:53:25.082212  860437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-852552/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 11:53:25.107353  860437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-852552/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 11:53:25.132315  860437 provision.go:87] duration metric: took 388.373839ms to configureAuth
	I0429 11:53:25.132354  860437 buildroot.go:189] setting minikube options for container-runtime
	I0429 11:53:25.132593  860437 config.go:182] Loaded profile config "addons-399337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 11:53:25.132628  860437 main.go:141] libmachine: Checking connection to Docker...
	I0429 11:53:25.132640  860437 main.go:141] libmachine: (addons-399337) Calling .GetURL
	I0429 11:53:25.134038  860437 main.go:141] libmachine: (addons-399337) DBG | Using libvirt version 6000000
	I0429 11:53:25.136114  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:25.136554  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:25.136583  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:25.136888  860437 main.go:141] libmachine: Docker is up and running!
	I0429 11:53:25.136905  860437 main.go:141] libmachine: Reticulating splines...
	I0429 11:53:25.136919  860437 client.go:171] duration metric: took 26.448497182s to LocalClient.Create
	I0429 11:53:25.136951  860437 start.go:167] duration metric: took 26.448578032s to libmachine.API.Create "addons-399337"
	I0429 11:53:25.136966  860437 start.go:293] postStartSetup for "addons-399337" (driver="kvm2")
	I0429 11:53:25.136979  860437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 11:53:25.136999  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:25.137299  860437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 11:53:25.137325  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:25.139647  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:25.139931  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:25.139952  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:25.140070  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:25.140268  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:25.140431  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:25.140584  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:25.220837  860437 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 11:53:25.225536  860437 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 11:53:25.225583  860437 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-852552/.minikube/addons for local assets ...
	I0429 11:53:25.225680  860437 filesync.go:126] Scanning /home/jenkins/minikube-integration/18773-852552/.minikube/files for local assets ...
	I0429 11:53:25.225712  860437 start.go:296] duration metric: took 88.738593ms for postStartSetup
	I0429 11:53:25.225754  860437 main.go:141] libmachine: (addons-399337) Calling .GetConfigRaw
	I0429 11:53:25.226405  860437 main.go:141] libmachine: (addons-399337) Calling .GetIP
	I0429 11:53:25.228929  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:25.229289  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:25.229322  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:25.229525  860437 profile.go:143] Saving config to /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/config.json ...
	I0429 11:53:25.229717  860437 start.go:128] duration metric: took 26.560202796s to createHost
	I0429 11:53:25.229744  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:25.231907  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:25.232246  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:25.232293  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:25.232417  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:25.232617  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:25.232762  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:25.232864  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:25.233108  860437 main.go:141] libmachine: Using SSH client type: native
	I0429 11:53:25.233372  860437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0429 11:53:25.233387  860437 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 11:53:25.334980  860437 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714391605.310591158
	
	I0429 11:53:25.335014  860437 fix.go:216] guest clock: 1714391605.310591158
	I0429 11:53:25.335033  860437 fix.go:229] Guest: 2024-04-29 11:53:25.310591158 +0000 UTC Remote: 2024-04-29 11:53:25.229731967 +0000 UTC m=+26.678820167 (delta=80.859191ms)
	I0429 11:53:25.335060  860437 fix.go:200] guest clock delta is within tolerance: 80.859191ms
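The `fix.go` lines above compare the guest's clock (read over SSH with `date +%s.%N`, which the log renders with Go's `%!s(MISSING)` escape artifacts) against the host's, and accept the machine when the delta is within tolerance. A minimal shell sketch of that comparison, with a stand-in 2-second tolerance:

```shell
#!/bin/sh
# Sketch of the guest-clock skew check: take two timestamps, compute the
# absolute delta, and compare against a tolerance. In minikube the first
# timestamp comes from the guest over SSH; here both are local stand-ins.
guest=$(date +%s)
host=$(date +%s)
delta=$((guest - host))
[ "$delta" -lt 0 ] && delta=$((-delta))
if [ "$delta" -le 2 ]; then
    echo "clock delta ${delta}s within tolerance"
else
    echo "clock skew ${delta}s too large" >&2
fi
```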
	I0429 11:53:25.335067  860437 start.go:83] releasing machines lock for "addons-399337", held for 26.665651968s
	I0429 11:53:25.335099  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:25.335428  860437 main.go:141] libmachine: (addons-399337) Calling .GetIP
	I0429 11:53:25.338077  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:25.338419  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:25.338456  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:25.338604  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:25.339161  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:25.339326  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:25.339426  860437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 11:53:25.339473  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:25.339539  860437 ssh_runner.go:195] Run: cat /version.json
	I0429 11:53:25.339557  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:25.342301  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:25.342576  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:25.342615  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:25.342637  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:25.342757  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:25.342944  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:25.342967  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:25.342973  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:25.343108  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:25.343178  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:25.343302  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:25.343374  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:25.343462  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:25.343613  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:25.423841  860437 ssh_runner.go:195] Run: systemctl --version
	I0429 11:53:25.451584  860437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 11:53:25.457885  860437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 11:53:25.457949  860437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:53:25.477752  860437 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
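The `find` invocation above (its `-printf` format appears as `%!p(MISSING)` due to log escaping) disables conflicting CNI configs by renaming any `*bridge*` or `*podman*` file to a `.mk_disabled` suffix, which the container runtime then ignores. A self-contained sketch against a stand-in directory (`/tmp/cni-demo` instead of `/etc/cni/net.d`):

```shell
#!/bin/sh
# Sketch of the CNI-disable step: rename bridge/podman CNI configs so the
# runtime skips them. CNI_DIR is a stand-in for /etc/cni/net.d.
CNI_DIR=${CNI_DIR:-/tmp/cni-demo}
mkdir -p "$CNI_DIR"
: > "$CNI_DIR/87-podman-bridge.conflist"   # demo file, as seen in the log

find "$CNI_DIR" -maxdepth 1 -type f \
    \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
```

Because already-renamed files fail the `! -name '*.mk_disabled'` test, re-running the command is harmless.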
	I0429 11:53:25.477793  860437 start.go:494] detecting cgroup driver to use...
	I0429 11:53:25.477883  860437 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 11:53:25.516071  860437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 11:53:25.534221  860437 docker.go:217] disabling cri-docker service (if available) ...
	I0429 11:53:25.534293  860437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 11:53:25.552273  860437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 11:53:25.569795  860437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 11:53:25.702040  860437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 11:53:25.844610  860437 docker.go:233] disabling docker service ...
	I0429 11:53:25.844690  860437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 11:53:25.859997  860437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 11:53:25.873511  860437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 11:53:26.012957  860437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 11:53:26.136608  860437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 11:53:26.150775  860437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:53:26.169692  860437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 11:53:26.180175  860437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 11:53:26.190458  860437 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 11:53:26.190518  860437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 11:53:26.200892  860437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:53:26.211256  860437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 11:53:26.221819  860437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 11:53:26.231883  860437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 11:53:26.242376  860437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 11:53:26.252970  860437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 11:53:26.263156  860437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 11:53:26.273470  860437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 11:53:26.282913  860437 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 11:53:26.282999  860437 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 11:53:26.295731  860437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 11:53:26.305277  860437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:53:26.426014  860437 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 11:53:26.457409  860437 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0429 11:53:26.457531  860437 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0429 11:53:26.462157  860437 retry.go:31] will retry after 1.183959615s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
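After restarting containerd, minikube polls for `/run/containerd/containerd.sock` with `stat`, retrying with backoff until the socket appears or a 60s deadline passes (the retry above is implemented in Go's `retry.go`; the sketch below uses a plain shell loop under a stand-in path and deadline):

```shell
#!/bin/sh
# Sketch of the wait-for-socket step: poll a path with stat until it
# exists or a deadline elapses. SOCK/DEADLINE are stand-in values.
SOCK=${SOCK:-/tmp/containerd-demo.sock}
DEADLINE=${DEADLINE:-5}
rm -f "$SOCK"

( sleep 1; : > "$SOCK" ) &        # demo: the "daemon" creates the socket late

elapsed=0
until stat "$SOCK" >/dev/null 2>&1; do
    if [ "$elapsed" -ge "$DEADLINE" ]; then
        echo "timed out waiting for $SOCK" >&2
        exit 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
done
echo "socket ready after ${elapsed}s"
```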
	I0429 11:53:27.646900  860437 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0429 11:53:27.652386  860437 start.go:562] Will wait 60s for crictl version
	I0429 11:53:27.652480  860437 ssh_runner.go:195] Run: which crictl
	I0429 11:53:27.656513  860437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 11:53:27.694314  860437 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.15
	RuntimeApiVersion:  v1
	I0429 11:53:27.694408  860437 ssh_runner.go:195] Run: containerd --version
	I0429 11:53:27.726113  860437 ssh_runner.go:195] Run: containerd --version
	I0429 11:53:27.757039  860437 out.go:177] * Preparing Kubernetes v1.30.0 on containerd 1.7.15 ...
	I0429 11:53:27.758504  860437 main.go:141] libmachine: (addons-399337) Calling .GetIP
	I0429 11:53:27.761495  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:27.761858  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:27.761881  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:27.762185  860437 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 11:53:27.766756  860437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 11:53:27.779865  860437 kubeadm.go:877] updating cluster {Name:addons-399337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:addons-399337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 11:53:27.779993  860437 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0429 11:53:27.780052  860437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 11:53:27.813881  860437 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 11:53:27.813959  860437 ssh_runner.go:195] Run: which lz4
	I0429 11:53:27.818045  860437 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 11:53:27.822456  860437 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 11:53:27.822493  860437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-852552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (393937158 bytes)
	I0429 11:53:29.171794  860437 containerd.go:563] duration metric: took 1.353778604s to copy over tarball
	I0429 11:53:29.171874  860437 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 11:53:31.494617  860437 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.322705233s)
	I0429 11:53:31.494657  860437 containerd.go:570] duration metric: took 2.322829307s to extract the tarball
	I0429 11:53:31.494668  860437 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 11:53:31.533390  860437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:53:31.644143  860437 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 11:53:31.675692  860437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 11:53:31.719746  860437 retry.go:31] will retry after 175.87278ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-29T11:53:31Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0429 11:53:31.896258  860437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 11:53:31.935228  860437 containerd.go:627] all images are preloaded for containerd runtime.
	I0429 11:53:31.935258  860437 cache_images.go:84] Images are preloaded, skipping loading
	I0429 11:53:31.935269  860437 kubeadm.go:928] updating node { 192.168.39.246 8443 v1.30.0 containerd true true} ...
	I0429 11:53:31.935407  860437 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-399337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-399337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 11:53:31.935484  860437 ssh_runner.go:195] Run: sudo crictl info
	I0429 11:53:31.973013  860437 cni.go:84] Creating CNI manager for ""
	I0429 11:53:31.973045  860437 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0429 11:53:31.973068  860437 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 11:53:31.973099  860437 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-399337 NodeName:addons-399337 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 11:53:31.973270  860437 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-399337"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 11:53:31.973359  860437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 11:53:31.984500  860437 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 11:53:31.984592  860437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 11:53:31.996126  860437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0429 11:53:32.015077  860437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 11:53:32.034278  860437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
	I0429 11:53:32.054248  860437 ssh_runner.go:195] Run: grep 192.168.39.246	control-plane.minikube.internal$ /etc/hosts
	I0429 11:53:32.058707  860437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 11:53:32.072584  860437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:53:32.185897  860437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:53:32.208466  860437 certs.go:68] Setting up /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337 for IP: 192.168.39.246
	I0429 11:53:32.208503  860437 certs.go:194] generating shared ca certs ...
	I0429 11:53:32.208522  860437 certs.go:226] acquiring lock for ca certs: {Name:mk551d07e82040c990769e557147ad8d8d53682f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:53:32.208705  860437 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18773-852552/.minikube/ca.key
	I0429 11:53:32.486541  860437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-852552/.minikube/ca.crt ...
	I0429 11:53:32.486585  860437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-852552/.minikube/ca.crt: {Name:mk577e556d53426b64025af6be3a53d6a90fb719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:53:32.486762  860437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-852552/.minikube/ca.key ...
	I0429 11:53:32.486774  860437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-852552/.minikube/ca.key: {Name:mk65086ae3e6c2b1c7f0d3664c3ccde4f0ff7326 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:53:32.486846  860437 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18773-852552/.minikube/proxy-client-ca.key
	I0429 11:53:32.606821  860437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-852552/.minikube/proxy-client-ca.crt ...
	I0429 11:53:32.606852  860437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-852552/.minikube/proxy-client-ca.crt: {Name:mk49bc203e7cc591f06f0d30ecb6cf7144bb9728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:53:32.607016  860437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-852552/.minikube/proxy-client-ca.key ...
	I0429 11:53:32.607030  860437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-852552/.minikube/proxy-client-ca.key: {Name:mkb04ba3e385b693bcb7717a80cff48c0385691e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:53:32.607100  860437 certs.go:256] generating profile certs ...
	I0429 11:53:32.607166  860437 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.key
	I0429 11:53:32.607183  860437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt with IP's: []
	I0429 11:53:32.808441  860437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt ...
	I0429 11:53:32.808485  860437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: {Name:mk0faeab4b308e64cc81e9b22ac0f7d636b81985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:53:32.808684  860437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.key ...
	I0429 11:53:32.808728  860437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.key: {Name:mk4e9e2fc0cbd0839a2014991696206b89c24146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:53:32.808834  860437 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/apiserver.key.40856bf7
	I0429 11:53:32.808862  860437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/apiserver.crt.40856bf7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246]
	I0429 11:53:33.080563  860437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/apiserver.crt.40856bf7 ...
	I0429 11:53:33.080605  860437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/apiserver.crt.40856bf7: {Name:mk1227fdca99d876833f3352dfd85cc87bc006d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:53:33.080815  860437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/apiserver.key.40856bf7 ...
	I0429 11:53:33.080836  860437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/apiserver.key.40856bf7: {Name:mk3f5823028b958c389b9a11c877c79a6773d7c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:53:33.080938  860437 certs.go:381] copying /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/apiserver.crt.40856bf7 -> /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/apiserver.crt
	I0429 11:53:33.081058  860437 certs.go:385] copying /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/apiserver.key.40856bf7 -> /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/apiserver.key
	I0429 11:53:33.081143  860437 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/proxy-client.key
	I0429 11:53:33.081172  860437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/proxy-client.crt with IP's: []
	I0429 11:53:33.284704  860437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/proxy-client.crt ...
	I0429 11:53:33.284745  860437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/proxy-client.crt: {Name:mk3a38bd7974fb02c5e13e1e251d5711181207f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:53:33.284937  860437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/proxy-client.key ...
	I0429 11:53:33.284957  860437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/proxy-client.key: {Name:mkb170e8afa18f91ae5335e5fe4f243acc1cbdfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:53:33.285167  860437 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-852552/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 11:53:33.285214  860437 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-852552/.minikube/certs/ca.pem (1078 bytes)
	I0429 11:53:33.285248  860437 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-852552/.minikube/certs/cert.pem (1123 bytes)
	I0429 11:53:33.285285  860437 certs.go:484] found cert: /home/jenkins/minikube-integration/18773-852552/.minikube/certs/key.pem (1679 bytes)
	I0429 11:53:33.285937  860437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-852552/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 11:53:33.312996  860437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-852552/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 11:53:33.338340  860437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-852552/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 11:53:33.364089  860437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-852552/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 11:53:33.389233  860437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 11:53:33.415127  860437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 11:53:33.439982  860437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 11:53:33.465533  860437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 11:53:33.490709  860437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18773-852552/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 11:53:33.516140  860437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 11:53:33.533488  860437 ssh_runner.go:195] Run: openssl version
	I0429 11:53:33.540078  860437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 11:53:33.553047  860437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:53:33.558043  860437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:53 /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:53:33.558126  860437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:53:33.564353  860437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 11:53:33.576390  860437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 11:53:33.580729  860437 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 11:53:33.580786  860437 kubeadm.go:391] StartCluster: {Name:addons-399337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-399337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:53:33.580886  860437 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0429 11:53:33.580959  860437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 11:53:33.619475  860437 cri.go:89] found id: ""
	I0429 11:53:33.685409  860437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 11:53:33.697156  860437 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 11:53:33.707961  860437 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 11:53:33.718374  860437 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 11:53:33.718410  860437 kubeadm.go:156] found existing configuration files:
	
	I0429 11:53:33.718466  860437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 11:53:33.728494  860437 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 11:53:33.728565  860437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 11:53:33.739121  860437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 11:53:33.749189  860437 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 11:53:33.749266  860437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 11:53:33.760186  860437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 11:53:33.769978  860437 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 11:53:33.770051  860437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 11:53:33.780286  860437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 11:53:33.793085  860437 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 11:53:33.793168  860437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 11:53:33.807380  860437 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 11:53:33.887264  860437 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 11:53:33.887340  860437 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 11:53:34.005947  860437 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 11:53:34.006068  860437 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 11:53:34.006161  860437 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 11:53:34.227305  860437 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 11:53:34.343888  860437 out.go:204]   - Generating certificates and keys ...
	I0429 11:53:34.344066  860437 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 11:53:34.344171  860437 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 11:53:34.489402  860437 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 11:53:34.593755  860437 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 11:53:34.739506  860437 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 11:53:35.058355  860437 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 11:53:35.172810  860437 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 11:53:35.172967  860437 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-399337 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I0429 11:53:35.664336  860437 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 11:53:35.664558  860437 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-399337 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I0429 11:53:35.881815  860437 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 11:53:35.953322  860437 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 11:53:36.194494  860437 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 11:53:36.194793  860437 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 11:53:36.448932  860437 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 11:53:36.711236  860437 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 11:53:37.126901  860437 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 11:53:37.290354  860437 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 11:53:37.463945  860437 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 11:53:37.464750  860437 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 11:53:37.467441  860437 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 11:53:37.471376  860437 out.go:204]   - Booting up control plane ...
	I0429 11:53:37.471514  860437 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 11:53:37.471628  860437 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 11:53:37.471713  860437 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 11:53:37.495248  860437 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 11:53:37.496304  860437 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 11:53:37.496396  860437 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 11:53:37.638096  860437 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 11:53:37.638183  860437 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 11:53:38.138451  860437 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 500.979724ms
	I0429 11:53:38.138568  860437 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 11:53:43.138469  860437 kubeadm.go:309] [api-check] The API server is healthy after 5.002241539s
	I0429 11:53:43.150495  860437 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 11:53:43.168202  860437 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 11:53:43.203966  860437 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 11:53:43.204226  860437 kubeadm.go:309] [mark-control-plane] Marking the node addons-399337 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 11:53:43.218227  860437 kubeadm.go:309] [bootstrap-token] Using token: ipxj8m.v69ia1xa8cr83a1b
	I0429 11:53:43.219721  860437 out.go:204]   - Configuring RBAC rules ...
	I0429 11:53:43.219838  860437 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 11:53:43.226921  860437 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 11:53:43.234274  860437 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 11:53:43.237811  860437 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 11:53:43.241205  860437 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 11:53:43.247517  860437 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 11:53:43.545592  860437 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 11:53:43.983244  860437 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 11:53:44.544882  860437 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 11:53:44.545968  860437 kubeadm.go:309] 
	I0429 11:53:44.546074  860437 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 11:53:44.546111  860437 kubeadm.go:309] 
	I0429 11:53:44.546229  860437 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 11:53:44.546242  860437 kubeadm.go:309] 
	I0429 11:53:44.546286  860437 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 11:53:44.546409  860437 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 11:53:44.546500  860437 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 11:53:44.546522  860437 kubeadm.go:309] 
	I0429 11:53:44.546609  860437 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 11:53:44.546618  860437 kubeadm.go:309] 
	I0429 11:53:44.546688  860437 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 11:53:44.546698  860437 kubeadm.go:309] 
	I0429 11:53:44.546770  860437 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 11:53:44.546850  860437 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 11:53:44.546913  860437 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 11:53:44.546921  860437 kubeadm.go:309] 
	I0429 11:53:44.546992  860437 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 11:53:44.547063  860437 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 11:53:44.547070  860437 kubeadm.go:309] 
	I0429 11:53:44.547139  860437 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ipxj8m.v69ia1xa8cr83a1b \
	I0429 11:53:44.547242  860437 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d07253a1724b8327a490c03bfad6cee73ba9abb9c7824dbd702704d0cbe8cd8b \
	I0429 11:53:44.547280  860437 kubeadm.go:309] 	--control-plane 
	I0429 11:53:44.547287  860437 kubeadm.go:309] 
	I0429 11:53:44.547371  860437 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 11:53:44.547384  860437 kubeadm.go:309] 
	I0429 11:53:44.547452  860437 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ipxj8m.v69ia1xa8cr83a1b \
	I0429 11:53:44.547565  860437 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:d07253a1724b8327a490c03bfad6cee73ba9abb9c7824dbd702704d0cbe8cd8b 
	I0429 11:53:44.548458  860437 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 11:53:44.548513  860437 cni.go:84] Creating CNI manager for ""
	I0429 11:53:44.548550  860437 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0429 11:53:44.550245  860437 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 11:53:44.551393  860437 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 11:53:44.562736  860437 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 11:53:44.581468  860437 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 11:53:44.581595  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:44.581655  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-399337 minikube.k8s.io/updated_at=2024_04_29T11_53_44_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332 minikube.k8s.io/name=addons-399337 minikube.k8s.io/primary=true
	I0429 11:53:44.695929  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:44.727149  860437 ops.go:34] apiserver oom_adj: -16
	I0429 11:53:45.196789  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:45.696253  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:46.196946  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:46.696946  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:47.196389  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:47.696060  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:48.196278  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:48.696110  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:49.196291  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:49.696611  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:50.196182  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:50.696803  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:51.196248  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:51.696600  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:52.196197  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:52.696107  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:53.196326  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:53.696619  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:54.197010  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:54.696928  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:55.196904  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:55.696753  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:56.196481  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:56.696416  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:57.196904  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:57.696759  860437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:53:57.776122  860437 kubeadm.go:1107] duration metric: took 13.194576337s to wait for elevateKubeSystemPrivileges
	W0429 11:53:57.776180  860437 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 11:53:57.776193  860437 kubeadm.go:393] duration metric: took 24.195413297s to StartCluster
	I0429 11:53:57.776219  860437 settings.go:142] acquiring lock: {Name:mkd3bb726ace201c5e4071c1bb6ba1b789b2f489 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:53:57.776387  860437 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18773-852552/kubeconfig
	I0429 11:53:57.777748  860437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18773-852552/kubeconfig: {Name:mkd666a28c63e8d818d4da7cce1b5b76a87a9eab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:53:57.778366  860437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 11:53:57.778415  860437 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0429 11:53:57.780874  860437 out.go:177] * Verifying Kubernetes components...
	I0429 11:53:57.778553  860437 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0429 11:53:57.778827  860437 config.go:182] Loaded profile config "addons-399337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 11:53:57.782817  860437 addons.go:69] Setting yakd=true in profile "addons-399337"
	I0429 11:53:57.782850  860437 addons.go:69] Setting helm-tiller=true in profile "addons-399337"
	I0429 11:53:57.782826  860437 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-399337"
	I0429 11:53:57.782862  860437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:53:57.782885  860437 addons.go:69] Setting ingress-dns=true in profile "addons-399337"
	I0429 11:53:57.782905  860437 addons.go:234] Setting addon yakd=true in "addons-399337"
	I0429 11:53:57.782920  860437 addons.go:234] Setting addon helm-tiller=true in "addons-399337"
	I0429 11:53:57.782921  860437 addons.go:69] Setting registry=true in profile "addons-399337"
	I0429 11:53:57.782946  860437 addons.go:234] Setting addon ingress-dns=true in "addons-399337"
	I0429 11:53:57.782963  860437 addons.go:234] Setting addon registry=true in "addons-399337"
	I0429 11:53:57.782984  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:53:57.782992  860437 addons.go:69] Setting default-storageclass=true in profile "addons-399337"
	I0429 11:53:57.783010  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:53:57.783016  860437 addons.go:69] Setting metrics-server=true in profile "addons-399337"
	I0429 11:53:57.782853  860437 addons.go:69] Setting inspektor-gadget=true in profile "addons-399337"
	I0429 11:53:57.783041  860437 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-399337"
	I0429 11:53:57.783048  860437 addons.go:234] Setting addon metrics-server=true in "addons-399337"
	I0429 11:53:57.783051  860437 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-399337"
	I0429 11:53:57.783057  860437 addons.go:234] Setting addon inspektor-gadget=true in "addons-399337"
	I0429 11:53:57.783083  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:53:57.783096  860437 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-399337"
	I0429 11:53:57.783122  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:53:57.782985  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:53:57.783018  860437 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-399337"
	I0429 11:53:57.783004  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:53:57.784132  860437 addons.go:69] Setting storage-provisioner=true in profile "addons-399337"
	I0429 11:53:57.784186  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.784219  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.784224  860437 addons.go:69] Setting gcp-auth=true in profile "addons-399337"
	I0429 11:53:57.784260  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.784272  860437 mustload.go:65] Loading cluster: addons-399337
	I0429 11:53:57.784291  860437 addons.go:234] Setting addon storage-provisioner=true in "addons-399337"
	I0429 11:53:57.784303  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.784323  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.784421  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.784288  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.784202  860437 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-399337"
	I0429 11:53:57.784171  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:53:57.784541  860437 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-399337"
	I0429 11:53:57.784628  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.784681  860437 config.go:182] Loaded profile config "addons-399337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 11:53:57.782874  860437 addons.go:69] Setting ingress=true in profile "addons-399337"
	I0429 11:53:57.784820  860437 addons.go:234] Setting addon ingress=true in "addons-399337"
	I0429 11:53:57.784873  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.784163  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.784932  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.784974  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.785071  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:53:57.782826  860437 addons.go:69] Setting cloud-spanner=true in profile "addons-399337"
	I0429 11:53:57.785112  860437 addons.go:69] Setting volumesnapshots=true in profile "addons-399337"
	I0429 11:53:57.784477  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:53:57.785159  860437 addons.go:234] Setting addon cloud-spanner=true in "addons-399337"
	I0429 11:53:57.785159  860437 addons.go:234] Setting addon volumesnapshots=true in "addons-399337"
	I0429 11:53:57.785313  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.785316  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:53:57.785362  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.785379  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.785385  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.785397  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.785364  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.785466  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:53:57.785468  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:53:57.785692  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.785710  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.785727  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.785732  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.785770  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.785815  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.785838  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.785858  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.785879  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.785892  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.785835  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.786373  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.806163  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
	I0429 11:53:57.806172  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I0429 11:53:57.806816  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.806933  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.807392  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.807415  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.807513  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.807529  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.807768  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.807972  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.808355  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.808403  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.808553  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.808586  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.815224  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40425
	I0429 11:53:57.815451  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32921
	I0429 11:53:57.815888  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.816060  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.816143  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40415
	I0429 11:53:57.816569  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.816783  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.816796  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.816950  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.816972  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.817300  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.817335  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.817355  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.817388  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.817689  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.817887  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.818177  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.818220  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.818994  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.819035  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.819912  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43211
	I0429 11:53:57.821789  860437 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-399337"
	I0429 11:53:57.821830  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:53:57.822196  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.822226  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.825548  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.826115  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.826133  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.826560  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.826945  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.827161  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46747
	I0429 11:53:57.827451  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40559
	I0429 11:53:57.827587  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32815
	I0429 11:53:57.827895  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.828100  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.828582  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.828595  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.828949  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:53:57.829305  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.829327  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.830092  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.830192  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.830214  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.830312  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.830600  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.830627  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.831266  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.831292  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.831315  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.831343  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44755
	I0429 11:53:57.831558  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.831632  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.832470  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.832489  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.832905  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.834188  860437 addons.go:234] Setting addon default-storageclass=true in "addons-399337"
	I0429 11:53:57.834234  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:53:57.834270  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.834614  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.834647  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.834859  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.834898  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.835446  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.835487  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.843426  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43599
	I0429 11:53:57.843982  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.844590  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.844622  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.845041  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.845633  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.845686  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.854250  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33499
	I0429 11:53:57.854797  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.855395  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.855418  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.855851  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.856166  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.856167  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0429 11:53:57.856969  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.857716  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.857747  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.858276  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.859025  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.859065  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.859286  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:57.861754  860437 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0429 11:53:57.860217  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0429 11:53:57.863271  860437 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0429 11:53:57.863284  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0429 11:53:57.863307  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:57.867234  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.867682  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:57.867707  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.867745  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33455
	I0429 11:53:57.868052  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:57.868260  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:57.868280  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.868728  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:57.868800  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.868828  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.868846  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.869232  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.869391  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:57.869933  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.871847  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:57.873971  860437 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.16
	I0429 11:53:57.872873  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.875288  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I0429 11:53:57.875719  860437 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0429 11:53:57.875737  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0429 11:53:57.875757  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:57.875818  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.877381  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.878165  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.878184  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.878192  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.879097  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.879704  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42479
	I0429 11:53:57.879826  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.879867  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.880203  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.880475  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.880561  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39437
	I0429 11:53:57.880830  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:57.880852  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.881061  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:57.881247  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:57.881389  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:57.881506  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:57.882181  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39439
	I0429 11:53:57.882335  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0429 11:53:57.882526  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39773
	I0429 11:53:57.882737  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.882857  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.882918  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.883296  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.883390  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.883445  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.883463  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.883591  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.883845  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.884416  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.884456  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.884794  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.884813  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.884939  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.884949  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.885074  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.885084  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.885192  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.885201  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.885480  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.885739  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.885816  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.887592  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.887677  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.888529  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.888577  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.888809  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45703
	I0429 11:53:57.888841  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I0429 11:53:57.888963  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.889350  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.890230  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:53:57.890270  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:53:57.890586  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.890607  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.890890  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.890996  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0429 11:53:57.891262  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.891633  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.891637  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.891744  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:57.892145  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.892160  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.892198  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:57.892264  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.892281  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.894072  860437 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0429 11:53:57.892515  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.892843  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.894536  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:57.894570  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45891
	I0429 11:53:57.895650  860437 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 11:53:57.895668  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0429 11:53:57.895682  860437 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0429 11:53:57.897282  860437 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0429 11:53:57.897302  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0429 11:53:57.895690  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:57.897323  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:57.896015  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.898856  860437 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0429 11:53:57.896066  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:57.896878  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.900000  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38787
	I0429 11:53:57.901706  860437 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0429 11:53:57.900510  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:57.900774  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.901375  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.902264  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.902995  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:57.903015  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.904364  860437 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0429 11:53:57.903124  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.902852  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.903229  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:57.903682  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:57.903802  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.904942  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37321
	I0429 11:53:57.905623  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:57.905922  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:57.905944  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:57.906077  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.906970  860437 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0429 11:53:57.907099  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.907172  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.910737  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.911256  860437 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0429 11:53:57.911592  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:57.911594  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.911617  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:57.911915  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43239
	I0429 11:53:57.911950  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.912108  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.912892  860437 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 11:53:57.912910  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 11:53:57.912914  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.912932  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:57.912700  860437 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0429 11:53:57.913181  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.913209  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:57.913238  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:57.913467  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.913743  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.914358  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40385
	I0429 11:53:57.915155  860437 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0429 11:53:57.916967  860437 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0429 11:53:57.915214  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:57.915834  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.915956  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.916467  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.917288  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.918291  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:57.918293  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.918313  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.917325  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39799
	I0429 11:53:57.917359  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:57.920264  860437 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0429 11:53:57.918055  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:57.919158  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.919276  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.919760  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.920048  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:57.921754  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.921803  860437 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0429 11:53:57.922201  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.922222  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:57.922956  860437 out.go:177]   - Using image docker.io/registry:2.8.3
	I0429 11:53:57.923163  860437 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 11:53:57.923241  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0429 11:53:57.923543  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.923681  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.923705  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:57.923914  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37829
	I0429 11:53:57.925513  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37211
	I0429 11:53:57.925880  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:57.926349  860437 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0429 11:53:57.927353  860437 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0429 11:53:57.928759  860437 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0429 11:53:57.930533  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0429 11:53:57.930570  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:57.927470  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.927540  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.932258  860437 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 11:53:57.932277  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 11:53:57.932294  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:57.927986  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:57.928053  860437 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 11:53:57.932657  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 11:53:57.932678  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:57.928077  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.928087  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:53:57.926382  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:57.930517  860437 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0429 11:53:57.932882  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0429 11:53:57.932914  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:57.933601  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.933624  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.933789  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:57.933867  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.936464  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.937455  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.937500  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.937540  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.937611  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:53:57.939939  860437 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0429 11:53:57.937656  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:53:57.938767  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.938797  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:57.939173  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:57.939278  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.940126  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.940160  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:57.940251  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:57.940302  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:57.940644  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:57.941078  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:57.941356  860437 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0429 11:53:57.941364  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0429 11:53:57.941375  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:57.941414  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:57.941427  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.941436  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.941463  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:57.941474  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.941487  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:57.941496  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.941535  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:57.943086  860437 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0429 11:53:57.941561  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.942723  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:57.942746  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:57.942748  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:57.942766  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:57.942767  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:57.942803  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:53:57.945598  860437 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0429 11:53:57.944605  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:57.944691  860437 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 11:53:57.944731  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:57.944748  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:57.944768  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.945004  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:57.945027  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:53:57.945039  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:57.945106  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:57.945255  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:57.946738  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:57.946771  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.946975  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:57.947125  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:57.947179  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:57.947303  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:57.947391  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0429 11:53:57.947408  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:57.947426  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.947452  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:57.947537  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:57.947645  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:57.947759  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:57.949266  860437 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0429 11:53:57.949095  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:53:57.950511  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.950583  860437 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0429 11:53:57.951955  860437 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 11:53:57.950684  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:57.950915  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:57.952097  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0429 11:53:57.952136  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:57.952198  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.952261  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:57.952439  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:57.952656  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:53:57.955132  860437 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0429 11:53:57.954880  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.955302  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:57.956619  860437 out.go:177]   - Using image docker.io/busybox:stable
	I0429 11:53:57.956719  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:57.958064  860437 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 11:53:57.958079  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0429 11:53:57.958126  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:53:57.958081  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.956971  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:57.958337  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:57.958473  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	W0429 11:53:57.959490  860437 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40320->192.168.39.246:22: read: connection reset by peer
	I0429 11:53:57.959523  860437 retry.go:31] will retry after 333.762074ms: ssh: handshake failed: read tcp 192.168.39.1:40320->192.168.39.246:22: read: connection reset by peer
	I0429 11:53:57.961317  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.961733  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:53:57.961757  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:53:57.961926  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:53:57.962127  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:53:57.962326  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:53:57.962476  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	W0429 11:53:57.972061  860437 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40336->192.168.39.246:22: read: connection reset by peer
	I0429 11:53:57.972085  860437 retry.go:31] will retry after 323.931661ms: ssh: handshake failed: read tcp 192.168.39.1:40336->192.168.39.246:22: read: connection reset by peer
	I0429 11:53:58.176714  860437 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0429 11:53:58.176750  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0429 11:53:58.277534  860437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:53:58.278238  860437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 11:53:58.365003  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 11:53:58.383418  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 11:53:58.459081  860437 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0429 11:53:58.459123  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0429 11:53:58.474197  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 11:53:58.492287  860437 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0429 11:53:58.492316  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0429 11:53:58.501737  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0429 11:53:58.509190  860437 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 11:53:58.509217  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0429 11:53:58.517876  860437 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0429 11:53:58.517902  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0429 11:53:58.537923  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 11:53:58.541765  860437 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0429 11:53:58.541790  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0429 11:53:58.542992  860437 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0429 11:53:58.543016  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0429 11:53:58.573019  860437 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0429 11:53:58.573054  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0429 11:53:58.674960  860437 node_ready.go:35] waiting up to 6m0s for node "addons-399337" to be "Ready" ...
	I0429 11:53:58.679982  860437 node_ready.go:49] node "addons-399337" has status "Ready":"True"
	I0429 11:53:58.680021  860437 node_ready.go:38] duration metric: took 5.003452ms for node "addons-399337" to be "Ready" ...
	I0429 11:53:58.680035  860437 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 11:53:58.696062  860437 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace to be "Ready" ...
	I0429 11:53:58.839825  860437 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0429 11:53:58.839852  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0429 11:53:58.866719  860437 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 11:53:58.866757  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 11:53:58.871071  860437 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0429 11:53:58.871091  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0429 11:53:58.882022  860437 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0429 11:53:58.882053  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0429 11:53:58.900260  860437 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0429 11:53:58.900290  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0429 11:53:58.927188  860437 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0429 11:53:58.927214  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0429 11:53:58.935191  860437 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0429 11:53:58.935218  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0429 11:53:59.332615  860437 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0429 11:53:59.332646  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0429 11:53:59.335329  860437 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 11:53:59.335348  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 11:53:59.337605  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 11:53:59.351117  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0429 11:53:59.357620  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0429 11:53:59.396586  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 11:53:59.414969  860437 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0429 11:53:59.415004  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0429 11:53:59.440621  860437 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0429 11:53:59.440649  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0429 11:53:59.451180  860437 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0429 11:53:59.451206  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0429 11:53:59.518871  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 11:53:59.618068  860437 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0429 11:53:59.618099  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0429 11:53:59.634287  860437 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0429 11:53:59.634317  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0429 11:53:59.655454  860437 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0429 11:53:59.655487  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0429 11:53:59.674306  860437 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0429 11:53:59.674340  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0429 11:53:59.734250  860437 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0429 11:53:59.734337  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0429 11:53:59.751079  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0429 11:53:59.778350  860437 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0429 11:53:59.778391  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0429 11:53:59.854934  860437 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 11:53:59.854972  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0429 11:54:00.057034  860437 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0429 11:54:00.057071  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0429 11:54:00.088941  860437 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0429 11:54:00.088968  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0429 11:54:00.259655  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 11:54:00.264850  860437 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0429 11:54:00.264881  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0429 11:54:00.275907  860437 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 11:54:00.275934  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0429 11:54:00.471414  860437 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0429 11:54:00.471437  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0429 11:54:00.542567  860437 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.264293276s)
	I0429 11:54:00.542601  860437 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0429 11:54:00.612143  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 11:54:00.702930  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:00.831700  860437 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 11:54:00.831732  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0429 11:54:01.045832  860437 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-399337" context rescaled to 1 replicas
	I0429 11:54:01.062581  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 11:54:02.769839  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:03.134258  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.769203572s)
	I0429 11:54:03.134308  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.134322  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.134335  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.750880799s)
	I0429 11:54:03.134389  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.134401  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.134433  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.660204173s)
	I0429 11:54:03.134457  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.134465  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.134508  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.632737645s)
	I0429 11:54:03.134525  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.596575716s)
	I0429 11:54:03.134536  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.134540  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.134547  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.134549  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.134636  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:03.134673  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.134680  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.134688  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.134694  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.134978  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:03.135018  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.135025  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.135033  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.135043  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.135090  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:03.135106  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:03.135125  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.135131  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.135138  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.135144  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.135343  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.135351  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.135360  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.135366  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.135725  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.135741  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.136014  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:03.136056  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.136062  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.136070  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.136077  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.136887  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:03.136916  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.136922  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.137048  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:03.137075  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:03.137096  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.137105  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.137136  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:03.137153  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.137159  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.137260  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.137271  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.159357  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.159381  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.159681  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.159703  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.159729  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:03.981440  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.643791146s)
	I0429 11:54:03.981514  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.981527  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.981567  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.630409751s)
	I0429 11:54:03.981619  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.981638  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.981637  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.623973189s)
	I0429 11:54:03.981675  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.981690  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.981943  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.981962  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.981989  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.981998  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.982058  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:03.982090  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.982098  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.982106  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.982113  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.982132  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:03.981955  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.982199  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.982221  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:03.982230  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:03.982537  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:03.982549  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.982563  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.982570  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.982574  860437 addons.go:470] Verifying addon registry=true in "addons-399337"
	I0429 11:54:03.982578  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.984164  860437 out.go:177] * Verifying registry addon...
	I0429 11:54:03.982768  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:03.982809  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:03.985351  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:03.986363  860437 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0429 11:54:04.047547  860437 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0429 11:54:04.047586  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:04.099421  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:04.099446  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:04.099770  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:04.099789  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:04.099795  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:04.530593  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:04.909665  860437 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0429 11:54:04.909766  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:54:04.912590  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:54:04.913006  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:54:04.913039  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:54:04.913284  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:54:04.913478  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:54:04.913710  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:54:04.913895  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:54:04.991150  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:05.154628  860437 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0429 11:54:05.280202  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:05.463503  860437 addons.go:234] Setting addon gcp-auth=true in "addons-399337"
	I0429 11:54:05.463579  860437 host.go:66] Checking if "addons-399337" exists ...
	I0429 11:54:05.464057  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:54:05.464099  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:54:05.479983  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I0429 11:54:05.480585  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:54:05.481186  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:54:05.481215  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:54:05.481690  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:54:05.482261  860437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 11:54:05.482293  860437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 11:54:05.497345  860437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36205
	I0429 11:54:05.497915  860437 main.go:141] libmachine: () Calling .GetVersion
	I0429 11:54:05.498446  860437 main.go:141] libmachine: Using API Version  1
	I0429 11:54:05.498474  860437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 11:54:05.498813  860437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 11:54:05.499181  860437 main.go:141] libmachine: (addons-399337) Calling .GetState
	I0429 11:54:05.500740  860437 main.go:141] libmachine: (addons-399337) Calling .DriverName
	I0429 11:54:05.501013  860437 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0429 11:54:05.501040  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHHostname
	I0429 11:54:05.503980  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:54:05.504374  860437 main.go:141] libmachine: (addons-399337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:57:e6", ip: ""} in network mk-addons-399337: {Iface:virbr1 ExpiryTime:2024-04-29 12:53:14 +0000 UTC Type:0 Mac:52:54:00:eb:57:e6 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:addons-399337 Clientid:01:52:54:00:eb:57:e6}
	I0429 11:54:05.504405  860437 main.go:141] libmachine: (addons-399337) DBG | domain addons-399337 has defined IP address 192.168.39.246 and MAC address 52:54:00:eb:57:e6 in network mk-addons-399337
	I0429 11:54:05.504556  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHPort
	I0429 11:54:05.504753  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHKeyPath
	I0429 11:54:05.504937  860437 main.go:141] libmachine: (addons-399337) Calling .GetSSHUsername
	I0429 11:54:05.505130  860437 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/addons-399337/id_rsa Username:docker}
	I0429 11:54:05.507209  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:05.992220  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:06.492886  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:07.025785  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:07.270682  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.751762482s)
	I0429 11:54:07.270741  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.519604489s)
	I0429 11:54:07.270761  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:07.270776  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:07.270789  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:07.270806  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:07.270858  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.011153476s)
	W0429 11:54:07.270899  860437 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 11:54:07.270936  860437 retry.go:31] will retry after 296.126977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 11:54:07.270952  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.658761858s)
	I0429 11:54:07.270995  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:07.271011  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:07.271239  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:07.271272  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:07.271310  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:07.271323  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:07.271332  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:07.271342  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:07.271431  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:07.273052  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:07.273069  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:07.273078  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:07.272978  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:07.272992  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:07.273000  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:07.273124  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:07.273133  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:07.273141  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:07.273000  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:07.273196  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:07.273208  860437 addons.go:470] Verifying addon metrics-server=true in "addons-399337"
	I0429 11:54:07.273256  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:07.273302  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:07.273313  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:07.274930  860437 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-399337 service yakd-dashboard -n yakd-dashboard
	
	I0429 11:54:07.273491  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:07.273496  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:07.274976  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:07.281100  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.884469377s)
	I0429 11:54:07.281143  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:07.281154  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:07.281376  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:07.281395  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:07.281408  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:07.281416  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:07.281455  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:07.281620  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:07.281636  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:07.281660  860437 addons.go:470] Verifying addon ingress=true in "addons-399337"
	I0429 11:54:07.283256  860437 out.go:177] * Verifying ingress addon...
	I0429 11:54:07.285192  860437 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0429 11:54:07.314102  860437 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0429 11:54:07.314129  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:07.491175  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:07.568091  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 11:54:07.705774  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:07.789961  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:08.002273  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:08.235887  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.173245902s)
	I0429 11:54:08.235933  860437 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.734899186s)
	I0429 11:54:08.235961  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:08.235977  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:08.237498  860437 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0429 11:54:08.236350  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:08.236375  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:08.239146  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:08.240264  860437 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0429 11:54:08.239164  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:08.241495  860437 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0429 11:54:08.241517  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0429 11:54:08.240290  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:08.241845  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:08.241865  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:08.241877  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:08.241889  860437 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-399337"
	I0429 11:54:08.243290  860437 out.go:177] * Verifying csi-hostpath-driver addon...
	I0429 11:54:08.245455  860437 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0429 11:54:08.269416  860437 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0429 11:54:08.269445  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:08.315152  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:08.359832  860437 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0429 11:54:08.359858  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0429 11:54:08.434032  860437 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 11:54:08.434060  860437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0429 11:54:08.491701  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:08.520446  860437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 11:54:08.783606  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:08.804347  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:08.991966  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:09.253005  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:09.290274  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:09.495398  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:09.723018  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:09.767917  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:09.823151  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.302667693s)
	I0429 11:54:09.823212  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:09.823240  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:09.823147  860437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.254990456s)
	I0429 11:54:09.823319  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:09.823332  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:09.823579  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:09.823587  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:09.823597  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:09.823606  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:09.823614  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:09.825705  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:09.825718  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:09.825733  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:09.825756  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:09.825766  860437 main.go:141] libmachine: Making call to close driver server
	I0429 11:54:09.825744  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:09.825776  860437 main.go:141] libmachine: (addons-399337) Calling .Close
	I0429 11:54:09.825705  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:09.826041  860437 main.go:141] libmachine: Successfully made call to close driver server
	I0429 11:54:09.826071  860437 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 11:54:09.826088  860437 main.go:141] libmachine: (addons-399337) DBG | Closing plugin on server side
	I0429 11:54:09.827529  860437 addons.go:470] Verifying addon gcp-auth=true in "addons-399337"
	I0429 11:54:09.829771  860437 out.go:177] * Verifying gcp-auth addon...
	I0429 11:54:09.831204  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:09.832055  860437 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0429 11:54:09.840754  860437 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0429 11:54:09.840771  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:10.000379  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:10.252698  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:10.291680  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:10.336613  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:10.491154  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:10.751911  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:10.790171  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:10.835247  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:10.991396  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:11.250803  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:11.290164  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:11.337135  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:11.491834  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:11.754330  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:11.790678  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:11.836884  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:11.991716  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:12.204684  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:12.252069  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:12.290408  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:12.335844  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:12.491279  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:12.753598  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:12.790762  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:12.836259  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:12.991249  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:13.252134  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:13.290732  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:13.337471  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:13.492340  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:13.759253  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:13.791387  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:13.835932  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:13.994636  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:14.251296  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:14.290544  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:14.336564  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:14.494997  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:14.702752  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:14.754182  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:14.789673  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:14.838786  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:14.993928  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:15.254814  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:15.296108  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:15.336979  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:15.491531  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:15.751619  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:15.792968  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:15.840267  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:15.991592  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:16.251878  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:16.290101  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:16.336824  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:16.492004  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:16.702967  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:16.751797  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:16.789780  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:16.836117  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:16.991744  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:17.252412  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:17.289753  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:17.346516  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:17.491261  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:18.077688  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:18.079396  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:18.080016  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:18.083315  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:18.253619  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:18.289949  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:18.337583  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:18.493369  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:18.752894  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:18.792635  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:18.839604  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:18.995019  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:19.203262  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:19.250798  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:19.289976  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:19.336109  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:19.492153  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:19.752509  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:19.789940  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:19.836240  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:19.993579  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:20.252882  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:20.290934  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:20.336340  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:20.492219  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:20.753956  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:20.790469  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:20.836384  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:20.991703  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:21.251536  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:21.289630  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:21.336664  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:21.491455  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:21.703305  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:21.753721  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:21.790181  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:21.836591  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:21.992802  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:22.251802  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:22.290228  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:22.336336  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:22.496015  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:22.751439  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:22.795389  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:22.842783  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:23.133466  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:23.255955  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:23.290949  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:23.336358  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:23.491971  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:23.705515  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:23.752500  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:23.791429  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:23.835428  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:23.994420  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:24.251827  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:24.289339  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:24.336858  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:24.491352  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:24.757962  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:24.792404  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:24.836127  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:24.991198  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:25.250487  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:25.289791  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:25.336260  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:25.491331  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:25.751784  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:25.752066  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:25.789576  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:25.836561  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:25.991598  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:26.251813  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:26.291893  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:26.335807  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:26.491072  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:26.752330  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:26.789739  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:26.836362  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:26.994432  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:27.262854  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:27.303623  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:27.339311  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:27.491964  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:27.751053  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:27.792465  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:27.835140  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:27.997206  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:28.203246  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:28.251249  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:28.290016  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:28.337144  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:28.492426  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:28.752607  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:28.790033  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:28.835825  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:28.991399  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:29.251474  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:29.289717  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:29.337789  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:29.491162  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:29.755064  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:29.789982  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:29.836485  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:29.991188  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:30.251060  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:30.289173  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:30.335710  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:30.490761  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:30.702203  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:30.751165  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:30.789339  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:30.836122  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:30.991187  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:31.251675  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:31.289702  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:31.337841  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:31.491910  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:32.123403  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:32.127468  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:32.127877  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:32.128631  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:32.307916  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:32.314982  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:32.337364  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:32.491995  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:32.711996  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:32.750319  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:32.790158  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:32.836004  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:32.991993  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:33.251190  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:33.292663  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:33.335269  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:33.492444  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:33.750422  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:33.789234  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:33.835905  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:33.991291  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:34.250886  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:34.290956  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:34.336335  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:34.491707  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:34.750616  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:34.790123  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:34.835818  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:34.991854  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:35.206826  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:35.250797  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:35.291470  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:35.337289  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:35.492014  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:54:35.751815  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:35.790586  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:35.841085  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:35.991237  860437 kapi.go:107] duration metric: took 32.004868609s to wait for kubernetes.io/minikube-addons=registry ...
	I0429 11:54:36.251104  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:36.289554  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:36.338414  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:36.752920  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:36.790120  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:36.839071  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:37.251807  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:37.290393  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:37.336124  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:37.709306  860437 pod_ready.go:102] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"False"
	I0429 11:54:37.754069  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:37.791804  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:37.839107  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:38.228997  860437 pod_ready.go:92] pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace has status "Ready":"True"
	I0429 11:54:38.229024  860437 pod_ready.go:81] duration metric: took 39.532935205s for pod "coredns-7db6d8ff4d-clhv5" in "kube-system" namespace to be "Ready" ...
	I0429 11:54:38.229042  860437 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sqvhz" in "kube-system" namespace to be "Ready" ...
	I0429 11:54:38.261958  860437 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-sqvhz" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-sqvhz" not found
	I0429 11:54:38.261990  860437 pod_ready.go:81] duration metric: took 32.941008ms for pod "coredns-7db6d8ff4d-sqvhz" in "kube-system" namespace to be "Ready" ...
	E0429 11:54:38.262005  860437 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-sqvhz" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-sqvhz" not found
	I0429 11:54:38.262014  860437 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-399337" in "kube-system" namespace to be "Ready" ...
	I0429 11:54:38.272743  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:38.298875  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:38.299677  860437 pod_ready.go:92] pod "etcd-addons-399337" in "kube-system" namespace has status "Ready":"True"
	I0429 11:54:38.299701  860437 pod_ready.go:81] duration metric: took 37.679684ms for pod "etcd-addons-399337" in "kube-system" namespace to be "Ready" ...
	I0429 11:54:38.299715  860437 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-399337" in "kube-system" namespace to be "Ready" ...
	I0429 11:54:38.307078  860437 pod_ready.go:92] pod "kube-apiserver-addons-399337" in "kube-system" namespace has status "Ready":"True"
	I0429 11:54:38.307106  860437 pod_ready.go:81] duration metric: took 7.379825ms for pod "kube-apiserver-addons-399337" in "kube-system" namespace to be "Ready" ...
	I0429 11:54:38.307119  860437 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-399337" in "kube-system" namespace to be "Ready" ...
	I0429 11:54:38.312954  860437 pod_ready.go:92] pod "kube-controller-manager-addons-399337" in "kube-system" namespace has status "Ready":"True"
	I0429 11:54:38.312971  860437 pod_ready.go:81] duration metric: took 5.845001ms for pod "kube-controller-manager-addons-399337" in "kube-system" namespace to be "Ready" ...
	I0429 11:54:38.312984  860437 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c76rb" in "kube-system" namespace to be "Ready" ...
	I0429 11:54:38.340327  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:38.400033  860437 pod_ready.go:92] pod "kube-proxy-c76rb" in "kube-system" namespace has status "Ready":"True"
	I0429 11:54:38.400067  860437 pod_ready.go:81] duration metric: took 87.075576ms for pod "kube-proxy-c76rb" in "kube-system" namespace to be "Ready" ...
	I0429 11:54:38.400078  860437 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-399337" in "kube-system" namespace to be "Ready" ...
	I0429 11:54:38.892583  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:38.894173  860437 pod_ready.go:92] pod "kube-scheduler-addons-399337" in "kube-system" namespace has status "Ready":"True"
	I0429 11:54:38.894200  860437 pod_ready.go:81] duration metric: took 494.113423ms for pod "kube-scheduler-addons-399337" in "kube-system" namespace to be "Ready" ...
	I0429 11:54:38.894211  860437 pod_ready.go:38] duration metric: took 40.21415552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 11:54:38.894233  860437 api_server.go:52] waiting for apiserver process to appear ...
	I0429 11:54:38.894297  860437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 11:54:38.894882  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:38.899802  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:38.922385  860437 api_server.go:72] duration metric: took 41.143910009s to wait for apiserver process to appear ...
	I0429 11:54:38.922421  860437 api_server.go:88] waiting for apiserver healthz status ...
	I0429 11:54:38.922448  860437 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I0429 11:54:38.934173  860437 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I0429 11:54:38.935788  860437 api_server.go:141] control plane version: v1.30.0
	I0429 11:54:38.935815  860437 api_server.go:131] duration metric: took 13.386842ms to wait for apiserver health ...
	I0429 11:54:38.935824  860437 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 11:54:39.010694  860437 system_pods.go:59] 18 kube-system pods found
	I0429 11:54:39.010730  860437 system_pods.go:61] "coredns-7db6d8ff4d-clhv5" [c07a5ddb-e2e2-4176-8f7a-1cd22252dc68] Running
	I0429 11:54:39.010739  860437 system_pods.go:61] "csi-hostpath-attacher-0" [bc341d6c-fe79-42a5-a9ba-42f8fe8879e8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0429 11:54:39.010747  860437 system_pods.go:61] "csi-hostpath-resizer-0" [ae1aa144-e4d4-4593-8b40-85d299a1eabc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0429 11:54:39.010756  860437 system_pods.go:61] "csi-hostpathplugin-z79rc" [302a0773-cd0d-4525-9038-138f9454107a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0429 11:54:39.010762  860437 system_pods.go:61] "etcd-addons-399337" [1bcf3b94-bf17-45cf-b890-da6f5cb8e69c] Running
	I0429 11:54:39.010766  860437 system_pods.go:61] "kube-apiserver-addons-399337" [809cc04c-361d-4734-904c-ac96d98b988e] Running
	I0429 11:54:39.010770  860437 system_pods.go:61] "kube-controller-manager-addons-399337" [5e16c584-2773-424e-ab4f-c42bdf5506e0] Running
	I0429 11:54:39.010774  860437 system_pods.go:61] "kube-ingress-dns-minikube" [a7128d26-3cd2-4dff-b1f8-82ae0a0bd9ea] Running
	I0429 11:54:39.010777  860437 system_pods.go:61] "kube-proxy-c76rb" [cd9bc05a-40c1-46f3-bd49-24c202c408c1] Running
	I0429 11:54:39.010785  860437 system_pods.go:61] "kube-scheduler-addons-399337" [c40b5f8e-68fe-4a4f-a5a6-18f1705ffdad] Running
	I0429 11:54:39.010788  860437 system_pods.go:61] "metrics-server-c59844bb4-bdhjz" [6989fee0-f9b4-4dad-afaf-2a05bb7773b0] Running
	I0429 11:54:39.010791  860437 system_pods.go:61] "nvidia-device-plugin-daemonset-xkmsp" [30bb4e14-df21-4bc2-801b-bd1f4be76ca7] Running
	I0429 11:54:39.010794  860437 system_pods.go:61] "registry-proxy-9dsxn" [82994227-ad36-4723-8707-1a12d1acb7b0] Running
	I0429 11:54:39.010797  860437 system_pods.go:61] "registry-rjcm2" [8d55e4da-95cd-4672-947e-b85aad3a526e] Running
	I0429 11:54:39.010804  860437 system_pods.go:61] "snapshot-controller-745499f584-rch45" [50c194cd-8882-4a93-85a8-134e378c9f15] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 11:54:39.010809  860437 system_pods.go:61] "snapshot-controller-745499f584-rndlp" [97ebb519-455c-4952-a12b-7e0852485c69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 11:54:39.010814  860437 system_pods.go:61] "storage-provisioner" [08d11614-3a93-42cd-8f7f-0a53e2fd182e] Running
	I0429 11:54:39.010819  860437 system_pods.go:61] "tiller-deploy-6677d64bcd-xzxtg" [5327d74d-9125-4e88-afe1-b0720c1dcce0] Running
	I0429 11:54:39.010834  860437 system_pods.go:74] duration metric: took 75.000456ms to wait for pod list to return data ...
	I0429 11:54:39.010849  860437 default_sa.go:34] waiting for default service account to be created ...
	I0429 11:54:39.200315  860437 default_sa.go:45] found service account: "default"
	I0429 11:54:39.200349  860437 default_sa.go:55] duration metric: took 189.491885ms for default service account to be created ...
	I0429 11:54:39.200360  860437 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 11:54:39.251406  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:39.291413  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:39.337801  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:39.407496  860437 system_pods.go:86] 18 kube-system pods found
	I0429 11:54:39.407533  860437 system_pods.go:89] "coredns-7db6d8ff4d-clhv5" [c07a5ddb-e2e2-4176-8f7a-1cd22252dc68] Running
	I0429 11:54:39.407544  860437 system_pods.go:89] "csi-hostpath-attacher-0" [bc341d6c-fe79-42a5-a9ba-42f8fe8879e8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0429 11:54:39.407553  860437 system_pods.go:89] "csi-hostpath-resizer-0" [ae1aa144-e4d4-4593-8b40-85d299a1eabc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0429 11:54:39.407565  860437 system_pods.go:89] "csi-hostpathplugin-z79rc" [302a0773-cd0d-4525-9038-138f9454107a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0429 11:54:39.407572  860437 system_pods.go:89] "etcd-addons-399337" [1bcf3b94-bf17-45cf-b890-da6f5cb8e69c] Running
	I0429 11:54:39.407579  860437 system_pods.go:89] "kube-apiserver-addons-399337" [809cc04c-361d-4734-904c-ac96d98b988e] Running
	I0429 11:54:39.407589  860437 system_pods.go:89] "kube-controller-manager-addons-399337" [5e16c584-2773-424e-ab4f-c42bdf5506e0] Running
	I0429 11:54:39.407604  860437 system_pods.go:89] "kube-ingress-dns-minikube" [a7128d26-3cd2-4dff-b1f8-82ae0a0bd9ea] Running
	I0429 11:54:39.407610  860437 system_pods.go:89] "kube-proxy-c76rb" [cd9bc05a-40c1-46f3-bd49-24c202c408c1] Running
	I0429 11:54:39.407616  860437 system_pods.go:89] "kube-scheduler-addons-399337" [c40b5f8e-68fe-4a4f-a5a6-18f1705ffdad] Running
	I0429 11:54:39.407622  860437 system_pods.go:89] "metrics-server-c59844bb4-bdhjz" [6989fee0-f9b4-4dad-afaf-2a05bb7773b0] Running
	I0429 11:54:39.407629  860437 system_pods.go:89] "nvidia-device-plugin-daemonset-xkmsp" [30bb4e14-df21-4bc2-801b-bd1f4be76ca7] Running
	I0429 11:54:39.407639  860437 system_pods.go:89] "registry-proxy-9dsxn" [82994227-ad36-4723-8707-1a12d1acb7b0] Running
	I0429 11:54:39.407646  860437 system_pods.go:89] "registry-rjcm2" [8d55e4da-95cd-4672-947e-b85aad3a526e] Running
	I0429 11:54:39.407658  860437 system_pods.go:89] "snapshot-controller-745499f584-rch45" [50c194cd-8882-4a93-85a8-134e378c9f15] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 11:54:39.407669  860437 system_pods.go:89] "snapshot-controller-745499f584-rndlp" [97ebb519-455c-4952-a12b-7e0852485c69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 11:54:39.407676  860437 system_pods.go:89] "storage-provisioner" [08d11614-3a93-42cd-8f7f-0a53e2fd182e] Running
	I0429 11:54:39.407683  860437 system_pods.go:89] "tiller-deploy-6677d64bcd-xzxtg" [5327d74d-9125-4e88-afe1-b0720c1dcce0] Running
	I0429 11:54:39.407693  860437 system_pods.go:126] duration metric: took 207.326309ms to wait for k8s-apps to be running ...
	I0429 11:54:39.407707  860437 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 11:54:39.407764  860437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 11:54:39.427249  860437 system_svc.go:56] duration metric: took 19.532094ms WaitForService to wait for kubelet
	I0429 11:54:39.427281  860437 kubeadm.go:576] duration metric: took 41.648813956s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 11:54:39.427307  860437 node_conditions.go:102] verifying NodePressure condition ...
	I0429 11:54:39.600977  860437 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 11:54:39.601043  860437 node_conditions.go:123] node cpu capacity is 2
	I0429 11:54:39.601064  860437 node_conditions.go:105] duration metric: took 173.749869ms to run NodePressure ...
	I0429 11:54:39.601080  860437 start.go:240] waiting for startup goroutines ...
	I0429 11:54:39.751994  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:39.790454  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:39.836322  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:40.251389  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:40.292694  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:40.337948  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:40.752075  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:40.788973  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:40.836602  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:41.255218  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:41.290109  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:41.336198  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:41.755227  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:42.051838  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:42.051989  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:42.252979  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:42.290005  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:42.336292  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:42.751778  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:42.789847  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:42.835975  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:43.254559  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:43.289288  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:43.337316  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:43.751723  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:43.791568  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:43.836885  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:44.253222  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:44.292578  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:44.336820  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:44.754011  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:44.791218  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:44.836263  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:45.519592  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:45.521584  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:45.525091  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:45.751811  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:45.789486  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:45.836474  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:46.253854  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:46.289469  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:46.338195  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:46.751507  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:46.790450  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:46.841827  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:47.251880  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:47.290218  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:47.336394  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:47.753288  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:47.801951  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:47.905573  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:48.253371  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:48.289923  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:48.336225  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:48.756998  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:48.791298  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:48.885013  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:49.251410  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:49.290082  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:49.335903  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:49.761752  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:49.792315  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:49.835599  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:50.255402  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:50.293555  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:50.336878  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:50.856793  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:50.860678  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:50.860835  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:51.255926  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:51.292654  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:51.337580  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:51.752075  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:51.790613  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:51.837878  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:52.254221  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:52.290180  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:52.336109  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:52.751776  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:52.790633  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:52.838643  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:53.253798  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:53.296463  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:53.340903  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:53.750699  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:53.790916  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:53.838889  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:54.282400  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:54.295023  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:54.335948  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:54.754804  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:54.790397  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:54.839821  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:55.251660  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:55.290031  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:55.335970  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:55.753283  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:55.794438  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:55.838234  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:56.251631  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:56.290110  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:56.336198  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:56.753635  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:56.790042  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:56.835371  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:57.251683  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:57.290087  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:57.335441  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:57.750768  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:54:57.789188  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:57.836163  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:58.251567  860437 kapi.go:107] duration metric: took 50.006109412s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0429 11:54:58.289985  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:58.335908  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:58.790353  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:58.835740  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:59.290344  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:59.336186  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:54:59.789675  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:54:59.836028  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:00.290445  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:00.335958  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:00.790885  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:00.836194  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:01.289905  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:01.336226  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:01.790371  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:01.836476  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:02.289715  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:02.336692  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:02.790593  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:02.836719  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:03.290391  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:03.336311  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:03.791425  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:03.836303  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:04.290779  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:04.336946  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:04.791623  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:04.836662  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:05.292475  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:05.336253  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:05.789853  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:05.835661  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:06.290405  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:06.336216  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:06.789674  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:06.836962  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:07.291062  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:07.336384  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:07.790956  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:07.836431  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:08.289564  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:08.360027  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:08.793139  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:08.836079  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:09.290716  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:09.336623  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:09.790634  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:09.836521  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:10.290869  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:10.336278  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:10.789902  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:10.835362  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:11.290219  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:11.337574  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:11.791441  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:11.838160  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:12.290221  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:12.335824  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:12.794025  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:12.835637  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:13.291278  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:13.336591  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:13.794167  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:13.835768  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:14.290824  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:14.336708  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:14.791318  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:14.836849  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:15.600190  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:15.600796  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:15.791293  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:15.836328  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:16.295827  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:16.336233  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:16.790281  860437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:55:16.836215  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:17.289516  860437 kapi.go:107] duration metric: took 1m10.00432003s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0429 11:55:17.336481  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:17.836425  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:18.338156  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:18.835676  860437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:55:19.336903  860437 kapi.go:107] duration metric: took 1m9.504843382s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0429 11:55:19.338901  860437 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-399337 cluster.
	I0429 11:55:19.340293  860437 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0429 11:55:19.341599  860437 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0429 11:55:19.343070  860437 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, ingress-dns, cloud-spanner, default-storageclass, helm-tiller, storage-provisioner-rancher, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0429 11:55:19.344600  860437 addons.go:505] duration metric: took 1m21.56606702s for enable addons: enabled=[storage-provisioner nvidia-device-plugin ingress-dns cloud-spanner default-storageclass helm-tiller storage-provisioner-rancher metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0429 11:55:19.344646  860437 start.go:245] waiting for cluster config update ...
	I0429 11:55:19.344666  860437 start.go:254] writing updated cluster config ...
	I0429 11:55:19.344958  860437 ssh_runner.go:195] Run: rm -f paused
	I0429 11:55:19.396824  860437 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 11:55:19.398820  860437 out.go:177] * Done! kubectl is now configured to use "addons-399337" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	fbb523b9594b5       a416a98b71e22       2 seconds ago        Exited              helper-pod                               0                   18e945983f276       helper-pod-create-pvc-0c0e318d-4448-4498-abd7-025eb4f88ed0
	9959552d53865       e45ec2747dd93       2 seconds ago        Exited              gadget                                   3                   6a313070887d6       gadget-d4jq2
	682baa3893559       beae173ccac6a       2 seconds ago        Exited              registry-test                            0                   501a72ad8d98b       registry-test
	0aee649614977       98f6c3b32d565       3 seconds ago        Exited              helm-test                                0                   0d0d89638e20f       helm-test
	5c5a64d467539       7383c266ef252       5 seconds ago        Running             task-pv-container                        0                   3a0972f7d6300       task-pv-pod
	385046d305b1d       7373e995f4086       10 seconds ago       Running             headlamp                                 0                   0e016f688d822       headlamp-7559bf459f-cqfvg
	4d2101137be20       db2fc13d44d50       24 seconds ago       Running             gcp-auth                                 0                   cdf1ea1c6a270       gcp-auth-5db96cd9b4-h8sch
	a60b40b35afd1       ffcc66479b5ba       25 seconds ago       Running             controller                               0                   d62ed4cd95baf       ingress-nginx-controller-84df5799c-nssm7
	261306b51937c       738351fd438f0       45 seconds ago       Running             csi-snapshotter                          0                   b71b746ac948c       csi-hostpathplugin-z79rc
	70d58dc45162e       931dbfd16f87c       46 seconds ago       Running             csi-provisioner                          0                   b71b746ac948c       csi-hostpathplugin-z79rc
	79aeff8b8ac1f       e899260153aed       47 seconds ago       Running             liveness-probe                           0                   b71b746ac948c       csi-hostpathplugin-z79rc
	dfb8e082efea9       e255e073c508c       48 seconds ago       Running             hostpath                                 0                   b71b746ac948c       csi-hostpathplugin-z79rc
	55b01d3b035f6       88ef14a257f42       50 seconds ago       Running             node-driver-registrar                    0                   b71b746ac948c       csi-hostpathplugin-z79rc
	adbb38b60691d       b29d748098e32       51 seconds ago       Exited              patch                                    0                   f8d158cb30f5a       gcp-auth-certs-patch-z782x
	8e9ae78d62047       b29d748098e32       51 seconds ago       Exited              create                                   0                   157fdc7b0c810       gcp-auth-certs-create-fvrpf
	399d1db4439fa       19a639eda60f0       51 seconds ago       Running             csi-resizer                              0                   f6e82a388aea0       csi-hostpath-resizer-0
	0544e2787166e       a1ed5895ba635       53 seconds ago       Running             csi-external-health-monitor-controller   0                   b71b746ac948c       csi-hostpathplugin-z79rc
	7c8df156bca67       59cbb42146a37       54 seconds ago       Running             csi-attacher                             0                   09555213920d6       csi-hostpath-attacher-0
	9ed1933638cc6       b29d748098e32       55 seconds ago       Exited              patch                                    0                   dd6880eb60ade       ingress-nginx-admission-patch-4hjv5
	1ece80191025c       b29d748098e32       56 seconds ago       Exited              create                                   0                   a05d3fefdee30       ingress-nginx-admission-create-gm9vm
	278f7864cc338       aa61ee9c70bc4       57 seconds ago       Running             volume-snapshot-controller               0                   d2408970d5595       snapshot-controller-745499f584-rndlp
	6a63641f0fa77       aa61ee9c70bc4       57 seconds ago       Running             volume-snapshot-controller               0                   28a9788b64dc8       snapshot-controller-745499f584-rch45
	0b8a1a719d3a5       31de47c733c91       59 seconds ago       Running             yakd                                     0                   4c59ce4606515       yakd-dashboard-5ddbf7d777-wt82q
	0cbf6dbf5164f       e16d1e3a10667       About a minute ago   Running             local-path-provisioner                   0                   814882324dce5       local-path-provisioner-8d985888d-wbvs8
	5c93bcf869433       a24c7c057ec87       About a minute ago   Running             metrics-server                           0                   f284171247d35       metrics-server-c59844bb4-bdhjz
	1e8ab725b0e03       38c5e506fa551       About a minute ago   Running             registry-proxy                           0                   6a8de69971c3d       registry-proxy-9dsxn
	10965d8d98b5a       3f39089e90831       About a minute ago   Running             tiller                                   0                   87cab8eea5846       tiller-deploy-6677d64bcd-xzxtg
	24100a2c2b625       9363667f8aecb       About a minute ago   Running             registry                                 0                   fded25e2cdbee       registry-rjcm2
	533d0f01aa13d       1499ed4fbd0aa       About a minute ago   Running             minikube-ingress-dns                     0                   5ddb7121936cc       kube-ingress-dns-minikube
	19bb2e708280c       6e38f40d628db       About a minute ago   Running             storage-provisioner                      0                   319ff5753d11e       storage-provisioner
	b0a4cb7d4581e       cbb01a7bd410d       About a minute ago   Running             coredns                                  0                   a1f176068227a       coredns-7db6d8ff4d-clhv5
	a30da3d9ae2cc       a0bf559e280cf       About a minute ago   Running             kube-proxy                               0                   01f18983a9c50       kube-proxy-c76rb
	74172e40027ff       259c8277fcbbc       2 minutes ago        Running             kube-scheduler                           0                   0572aa9e3bcfc       kube-scheduler-addons-399337
	fa926b2753efd       3861cfcd7c04c       2 minutes ago        Running             etcd                                     0                   d04abc4ddf197       etcd-addons-399337
	fc24128a4260e       c7aad43836fa5       2 minutes ago        Running             kube-controller-manager                  0                   621435c07d7a1       kube-controller-manager-addons-399337
	4d3f9d1e80187       c42f13656d0b2       2 minutes ago        Running             kube-apiserver                           0                   870bb6b3e7916       kube-apiserver-addons-399337
	
	
	==> containerd <==
	Apr 29 11:55:40 addons-399337 containerd[648]: time="2024-04-29T11:55:40.164403300Z" level=info msg="shim disconnected" id=fbb523b9594b53cfd79706fea64c838a45f33fcb41d8bcfaa565e94661375eda namespace=k8s.io
	Apr 29 11:55:40 addons-399337 containerd[648]: time="2024-04-29T11:55:40.164649353Z" level=warning msg="cleaning up after shim disconnected" id=fbb523b9594b53cfd79706fea64c838a45f33fcb41d8bcfaa565e94661375eda namespace=k8s.io
	Apr 29 11:55:40 addons-399337 containerd[648]: time="2024-04-29T11:55:40.164790008Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 29 11:55:40 addons-399337 containerd[648]: time="2024-04-29T11:55:40.167846951Z" level=info msg="StopPodSandbox for \"0d0d89638e20f5365b80af2482598fbbb39214d25987b1046a7c1ecfe1820f1b\""
	Apr 29 11:55:40 addons-399337 containerd[648]: time="2024-04-29T11:55:40.167937235Z" level=info msg="Container to stop \"0aee649614977c51b83091235ff0d593fd444e6630a793291de312c4817dfdb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Apr 29 11:55:40 addons-399337 containerd[648]: time="2024-04-29T11:55:40.257057246Z" level=info msg="shim disconnected" id=0d0d89638e20f5365b80af2482598fbbb39214d25987b1046a7c1ecfe1820f1b namespace=k8s.io
	Apr 29 11:55:40 addons-399337 containerd[648]: time="2024-04-29T11:55:40.258003248Z" level=warning msg="cleaning up after shim disconnected" id=0d0d89638e20f5365b80af2482598fbbb39214d25987b1046a7c1ecfe1820f1b namespace=k8s.io
	Apr 29 11:55:40 addons-399337 containerd[648]: time="2024-04-29T11:55:40.258046569Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 29 11:55:40 addons-399337 containerd[648]: time="2024-04-29T11:55:40.363725188Z" level=info msg="TearDown network for sandbox \"0d0d89638e20f5365b80af2482598fbbb39214d25987b1046a7c1ecfe1820f1b\" successfully"
	Apr 29 11:55:40 addons-399337 containerd[648]: time="2024-04-29T11:55:40.363895381Z" level=info msg="StopPodSandbox for \"0d0d89638e20f5365b80af2482598fbbb39214d25987b1046a7c1ecfe1820f1b\" returns successfully"
	Apr 29 11:55:40 addons-399337 containerd[648]: time="2024-04-29T11:55:40.787201092Z" level=info msg="shim disconnected" id=9959552d53865499547bd8826ca5d2d41f4502a02a0c9095e9645bfbaeff3bec namespace=k8s.io
	Apr 29 11:55:40 addons-399337 containerd[648]: time="2024-04-29T11:55:40.787313276Z" level=warning msg="cleaning up after shim disconnected" id=9959552d53865499547bd8826ca5d2d41f4502a02a0c9095e9645bfbaeff3bec namespace=k8s.io
	Apr 29 11:55:40 addons-399337 containerd[648]: time="2024-04-29T11:55:40.787325103Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 29 11:55:41 addons-399337 containerd[648]: time="2024-04-29T11:55:41.184439984Z" level=info msg="RemoveContainer for \"a476c4d5a4f81ff0ea3f1942fd3a07a81afb9bb2b1bd1d2cd22f220b9ed2c97b\""
	Apr 29 11:55:41 addons-399337 containerd[648]: time="2024-04-29T11:55:41.189521416Z" level=info msg="StopPodSandbox for \"501a72ad8d98b2282b4b2efe1c706eae59257071b2a085c39596024013c7e18a\""
	Apr 29 11:55:41 addons-399337 containerd[648]: time="2024-04-29T11:55:41.189855357Z" level=info msg="Container to stop \"682baa3893559af193dc46c3af758e5f936b7d92d797661fca7acab342799cae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Apr 29 11:55:41 addons-399337 containerd[648]: time="2024-04-29T11:55:41.201691788Z" level=info msg="RemoveContainer for \"a476c4d5a4f81ff0ea3f1942fd3a07a81afb9bb2b1bd1d2cd22f220b9ed2c97b\" returns successfully"
	Apr 29 11:55:41 addons-399337 containerd[648]: time="2024-04-29T11:55:41.271604216Z" level=info msg="shim disconnected" id=501a72ad8d98b2282b4b2efe1c706eae59257071b2a085c39596024013c7e18a namespace=k8s.io
	Apr 29 11:55:41 addons-399337 containerd[648]: time="2024-04-29T11:55:41.272003540Z" level=warning msg="cleaning up after shim disconnected" id=501a72ad8d98b2282b4b2efe1c706eae59257071b2a085c39596024013c7e18a namespace=k8s.io
	Apr 29 11:55:41 addons-399337 containerd[648]: time="2024-04-29T11:55:41.272076392Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 29 11:55:41 addons-399337 containerd[648]: time="2024-04-29T11:55:41.373375489Z" level=info msg="TearDown network for sandbox \"501a72ad8d98b2282b4b2efe1c706eae59257071b2a085c39596024013c7e18a\" successfully"
	Apr 29 11:55:41 addons-399337 containerd[648]: time="2024-04-29T11:55:41.373430032Z" level=info msg="StopPodSandbox for \"501a72ad8d98b2282b4b2efe1c706eae59257071b2a085c39596024013c7e18a\" returns successfully"
	Apr 29 11:55:42 addons-399337 containerd[648]: time="2024-04-29T11:55:42.113631489Z" level=info msg="StopContainer for \"24100a2c2b625b568bdfc4c93d95c4e5e6daf2dd0ad752fd34fc987fe64cf485\" with timeout 30 (s)"
	Apr 29 11:55:42 addons-399337 containerd[648]: time="2024-04-29T11:55:42.114569678Z" level=info msg="Stop container \"24100a2c2b625b568bdfc4c93d95c4e5e6daf2dd0ad752fd34fc987fe64cf485\" with signal terminated"
	Apr 29 11:55:42 addons-399337 containerd[648]: time="2024-04-29T11:55:42.168597340Z" level=info msg="StopContainer for \"1e8ab725b0e03a9b8e3cc66aeb2a8aa276a2c1b27ae58a2c065909beeab736ec\" with timeout 30 (s)"
	
	
	==> coredns [b0a4cb7d4581ed2f910b275b6aa366ccb1475a5abe93015fed012fa8f868e276] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:46343 - 62989 "HINFO IN 5760912784803502129.6339497702683727883. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025530536s
	[INFO] 10.244.0.7:49585 - 25735 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000745132s
	[INFO] 10.244.0.7:49585 - 33924 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000578s
	[INFO] 10.244.0.7:58417 - 37375 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000145804s
	[INFO] 10.244.0.7:58417 - 31741 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000079551s
	[INFO] 10.244.0.7:50743 - 40213 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108512s
	[INFO] 10.244.0.7:50743 - 31766 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000070005s
	[INFO] 10.244.0.7:49188 - 42496 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074219s
	[INFO] 10.244.0.7:49188 - 41474 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067566s
	[INFO] 10.244.0.7:60981 - 61681 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000067356s
	[INFO] 10.244.0.7:60981 - 3571 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109031s
	[INFO] 10.244.0.22:53771 - 58972 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000456281s
	[INFO] 10.244.0.22:51728 - 18216 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000566115s
	[INFO] 10.244.0.22:43878 - 1542 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135781s
	[INFO] 10.244.0.22:47063 - 8945 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000471013s
	[INFO] 10.244.0.22:59819 - 58988 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000207837s
	[INFO] 10.244.0.22:57040 - 31120 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000181694s
	[INFO] 10.244.0.22:34695 - 44658 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000798223s
	[INFO] 10.244.0.22:60930 - 65207 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.001306744s
	[INFO] 10.244.0.26:55503 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000361954s
	[INFO] 10.244.0.26:51753 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00015921s
	
	
	==> describe nodes <==
	Name:               addons-399337
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-399337
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ac56d99acb3fbd2d6010a41c69273a14230e0332
	                    minikube.k8s.io/name=addons-399337
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T11_53_44_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-399337
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-399337"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 11:53:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-399337
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 11:55:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 11:55:16 +0000   Mon, 29 Apr 2024 11:53:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 11:55:16 +0000   Mon, 29 Apr 2024 11:53:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 11:55:16 +0000   Mon, 29 Apr 2024 11:53:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 11:55:16 +0000   Mon, 29 Apr 2024 11:53:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    addons-399337
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 c286db315c164f3fabd8d161ace6b6d2
	  System UUID:                c286db31-5c16-4f3f-abd8-d161ace6b6d2
	  Boot ID:                    7e6e03d6-30ce-4ab6-8ed2-ee2ed5c0ad66
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.15
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (25 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  gadget                      gadget-d4jq2                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  gcp-auth                    gcp-auth-5db96cd9b4-h8sch                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  headlamp                    headlamp-7559bf459f-cqfvg                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  ingress-nginx               ingress-nginx-controller-84df5799c-nssm7                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         96s
	  kube-system                 coredns-7db6d8ff4d-clhv5                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     105s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 csi-hostpathplugin-z79rc                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 etcd-addons-399337                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         119s
	  kube-system                 kube-apiserver-addons-399337                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-addons-399337                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-c76rb                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-addons-399337                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 metrics-server-c59844bb4-bdhjz                                100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         99s
	  kube-system                 registry-proxy-9dsxn                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 registry-rjcm2                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 snapshot-controller-745499f584-rch45                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 snapshot-controller-745499f584-rndlp                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 tiller-deploy-6677d64bcd-xzxtg                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  local-path-storage          helper-pod-create-pvc-0c0e318d-4448-4498-abd7-025eb4f88ed0    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  local-path-storage          local-path-provisioner-8d985888d-wbvs8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-wt82q                               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     98s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m5s)  kubelet          Node addons-399337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m5s)  kubelet          Node addons-399337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m5s)  kubelet          Node addons-399337 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  119s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node addons-399337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node addons-399337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node addons-399337 status is now: NodeHasSufficientPID
	  Normal  NodeReady                118s                 kubelet          Node addons-399337 status is now: NodeReady
	  Normal  RegisteredNode           106s                 node-controller  Node addons-399337 event: Registered Node addons-399337 in Controller
	
	
	==> dmesg <==
	[  +0.287574] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +5.218564] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.054724] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.480245] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[  +5.430875] systemd-fstab-generator[866]: Ignoring "noauto" option for root device
	[  +0.061504] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.000065] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	[  +0.078809] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.236945] systemd-fstab-generator[1425]: Ignoring "noauto" option for root device
	[  +0.122422] kauditd_printk_skb: 21 callbacks suppressed
	[Apr29 11:54] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.086007] kauditd_printk_skb: 114 callbacks suppressed
	[  +7.087677] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.025089] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.793592] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.874793] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.990303] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.009334] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.113370] kauditd_printk_skb: 66 callbacks suppressed
	[Apr29 11:55] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.979586] kauditd_printk_skb: 44 callbacks suppressed
	[ +10.973753] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.039351] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.226362] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.006196] kauditd_printk_skb: 67 callbacks suppressed
	
	
	==> etcd [fa926b2753efdb4e92ce428d676dabe157bc211889e7a46bbf11094969e2bc68] <==
	{"level":"info","ts":"2024-04-29T11:55:15.586601Z","caller":"traceutil/trace.go:171","msg":"trace[379002339] linearizableReadLoop","detail":"{readStateIndex:1218; appliedIndex:1217; }","duration":"308.385545ms","start":"2024-04-29T11:55:15.278198Z","end":"2024-04-29T11:55:15.586584Z","steps":["trace[379002339] 'read index received'  (duration: 308.193149ms)","trace[379002339] 'applied index is now lower than readState.Index'  (duration: 191.672µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T11:55:15.586831Z","caller":"traceutil/trace.go:171","msg":"trace[287924917] transaction","detail":"{read_only:false; response_revision:1187; number_of_response:1; }","duration":"369.674987ms","start":"2024-04-29T11:55:15.217147Z","end":"2024-04-29T11:55:15.586822Z","steps":["trace[287924917] 'process raft request'  (duration: 369.293699ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T11:55:15.587015Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T11:55:15.217133Z","time spent":"369.816255ms","remote":"127.0.0.1:53492","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1183 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-29T11:55:15.587423Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"309.216659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14392"}
	{"level":"info","ts":"2024-04-29T11:55:15.587475Z","caller":"traceutil/trace.go:171","msg":"trace[583570774] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1187; }","duration":"309.292675ms","start":"2024-04-29T11:55:15.278174Z","end":"2024-04-29T11:55:15.587467Z","steps":["trace[583570774] 'agreement among raft nodes before linearized reading'  (duration: 309.123998ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T11:55:15.587497Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T11:55:15.27816Z","time spent":"309.331577ms","remote":"127.0.0.1:53510","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14415,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-04-29T11:55:15.58832Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.461037ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11487"}
	{"level":"info","ts":"2024-04-29T11:55:15.588544Z","caller":"traceutil/trace.go:171","msg":"trace[1781666861] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1187; }","duration":"263.707739ms","start":"2024-04-29T11:55:15.324826Z","end":"2024-04-29T11:55:15.588533Z","steps":["trace[1781666861] 'agreement among raft nodes before linearized reading'  (duration: 262.836727ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T11:55:18.302165Z","caller":"traceutil/trace.go:171","msg":"trace[852793553] transaction","detail":"{read_only:false; response_revision:1210; number_of_response:1; }","duration":"109.67126ms","start":"2024-04-29T11:55:18.192473Z","end":"2024-04-29T11:55:18.302144Z","steps":["trace[852793553] 'process raft request'  (duration: 106.731932ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T11:55:31.150448Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.661868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3966"}
	{"level":"info","ts":"2024-04-29T11:55:31.150535Z","caller":"traceutil/trace.go:171","msg":"trace[438285351] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1321; }","duration":"127.779572ms","start":"2024-04-29T11:55:31.022742Z","end":"2024-04-29T11:55:31.150521Z","steps":["trace[438285351] 'range keys from in-memory index tree'  (duration: 127.54152ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T11:55:31.150744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"430.700506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:183"}
	{"level":"info","ts":"2024-04-29T11:55:31.150788Z","caller":"traceutil/trace.go:171","msg":"trace[670992184] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:1321; }","duration":"430.772554ms","start":"2024-04-29T11:55:30.720008Z","end":"2024-04-29T11:55:31.150781Z","steps":["trace[670992184] 'range keys from in-memory index tree'  (duration: 430.645175ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T11:55:31.150831Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T11:55:30.719995Z","time spent":"430.830255ms","remote":"127.0.0.1:53528","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":206,"request content":"key:\"/registry/serviceaccounts/default/default\" "}
	{"level":"warn","ts":"2024-04-29T11:55:31.150961Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"404.014101ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6013"}
	{"level":"info","ts":"2024-04-29T11:55:31.151003Z","caller":"traceutil/trace.go:171","msg":"trace[1107815989] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1321; }","duration":"404.07777ms","start":"2024-04-29T11:55:30.746916Z","end":"2024-04-29T11:55:31.150994Z","steps":["trace[1107815989] 'range keys from in-memory index tree'  (duration: 403.939206ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T11:55:31.151021Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T11:55:30.746903Z","time spent":"404.113014ms","remote":"127.0.0.1:53510","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":6036,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-04-29T11:55:31.151183Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.631183ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T11:55:31.151281Z","caller":"traceutil/trace.go:171","msg":"trace[698113303] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1321; }","duration":"189.749926ms","start":"2024-04-29T11:55:30.961525Z","end":"2024-04-29T11:55:31.151275Z","steps":["trace[698113303] 'range keys from in-memory index tree'  (duration: 189.588602ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T11:55:35.944535Z","caller":"traceutil/trace.go:171","msg":"trace[782565370] linearizableReadLoop","detail":"{readStateIndex:1389; appliedIndex:1388; }","duration":"196.859922ms","start":"2024-04-29T11:55:35.747657Z","end":"2024-04-29T11:55:35.944517Z","steps":["trace[782565370] 'read index received'  (duration: 196.698983ms)","trace[782565370] 'applied index is now lower than readState.Index'  (duration: 160.508µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T11:55:35.944599Z","caller":"traceutil/trace.go:171","msg":"trace[999833268] transaction","detail":"{read_only:false; response_revision:1349; number_of_response:1; }","duration":"235.053309ms","start":"2024-04-29T11:55:35.70954Z","end":"2024-04-29T11:55:35.944594Z","steps":["trace[999833268] 'process raft request'  (duration: 234.848051ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T11:55:35.94475Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.093928ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6013"}
	{"level":"info","ts":"2024-04-29T11:55:35.944767Z","caller":"traceutil/trace.go:171","msg":"trace[80138172] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1349; }","duration":"197.143871ms","start":"2024-04-29T11:55:35.747619Z","end":"2024-04-29T11:55:35.944763Z","steps":["trace[80138172] 'agreement among raft nodes before linearized reading'  (duration: 197.047273ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T11:55:35.944816Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.989779ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T11:55:35.944849Z","caller":"traceutil/trace.go:171","msg":"trace[133518403] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:1349; }","duration":"185.047214ms","start":"2024-04-29T11:55:35.759794Z","end":"2024-04-29T11:55:35.944841Z","steps":["trace[133518403] 'agreement among raft nodes before linearized reading'  (duration: 184.992477ms)"],"step_count":1}
	
	
	==> gcp-auth [4d2101137be207ac1ca23ddb05d894c71a4211d92cdef9de35ce83a897ebcb6b] <==
	2024/04/29 11:55:18 GCP Auth Webhook started!
	2024/04/29 11:55:26 Ready to marshal response ...
	2024/04/29 11:55:26 Ready to write response ...
	2024/04/29 11:55:26 Ready to marshal response ...
	2024/04/29 11:55:26 Ready to write response ...
	2024/04/29 11:55:26 Ready to marshal response ...
	2024/04/29 11:55:26 Ready to write response ...
	2024/04/29 11:55:29 Ready to marshal response ...
	2024/04/29 11:55:29 Ready to write response ...
	2024/04/29 11:55:30 Ready to marshal response ...
	2024/04/29 11:55:30 Ready to write response ...
	2024/04/29 11:55:30 Ready to marshal response ...
	2024/04/29 11:55:30 Ready to write response ...
	2024/04/29 11:55:38 Ready to marshal response ...
	2024/04/29 11:55:38 Ready to write response ...
	2024/04/29 11:55:38 Ready to marshal response ...
	2024/04/29 11:55:38 Ready to write response ...
	2024/04/29 11:55:42 Ready to marshal response ...
	2024/04/29 11:55:42 Ready to write response ...
	
	
	==> kernel <==
	 11:55:42 up 2 min,  0 users,  load average: 3.55, 1.87, 0.73
	Linux addons-399337 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4d3f9d1e80187faf4a968059a2b455605daf8e771abc86bff00b40d042367661] <==
	I0429 11:54:04.389144       1 alloc.go:330] "allocated clusterIPs" service="yakd-dashboard/yakd-dashboard" clusterIPs={"IPv4":"10.108.245.213"}
	I0429 11:54:04.754950       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0429 11:54:05.194006       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 11:54:05.194043       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 11:54:05.591851       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 11:54:05.591882       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 11:54:05.633584       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 11:54:05.633636       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 11:54:06.746311       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.96.194.145"}
	I0429 11:54:06.816749       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.106.91.180"}
	I0429 11:54:06.882746       1 controller.go:615] quota admission added evaluator for: jobs.batch
	I0429 11:54:07.892290       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.102.48.245"}
	I0429 11:54:07.907483       1 controller.go:615] quota admission added evaluator for: statefulsets.apps
	I0429 11:54:08.112635       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.97.221.192"}
	I0429 11:54:09.592042       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.97.171.110"}
	W0429 11:54:38.923086       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 11:54:38.923194       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0429 11:54:38.923891       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.76.237:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.76.237:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.76.237:443: connect: connection refused
	E0429 11:54:38.926541       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.76.237:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.76.237:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.76.237:443: connect: connection refused
	E0429 11:54:38.932186       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.76.237:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.76.237:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.76.237:443: connect: connection refused
	I0429 11:54:39.053746       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0429 11:55:26.887556       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.92.63"}
	I0429 11:55:42.635649       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0429 11:55:42.854514       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.68.73"}
	
	
	==> kube-controller-manager [fc24128a4260e66ad1e6b5c33e8992dc50ea7990572fdc10cf05d7eafcaae5c1] <==
	I0429 11:54:54.262970       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 11:54:54.824839       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-5ddbf7d777" duration="14.83919ms"
	I0429 11:54:54.825360       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-5ddbf7d777" duration="62.528µs"
	I0429 11:54:59.727657       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-745499f584" duration="6.137133ms"
	I0429 11:54:59.727800       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-745499f584" duration="85.381µs"
	I0429 11:55:17.013901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="78.977µs"
	I0429 11:55:19.047809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="12.986614ms"
	I0429 11:55:19.048151       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="86.088µs"
	I0429 11:55:24.019694       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0429 11:55:24.021599       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 11:55:24.074041       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0429 11:55:24.080044       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0429 11:55:24.981106       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-6dc8d859f6" duration="6.684µs"
	I0429 11:55:26.938995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7559bf459f" duration="23.203634ms"
	E0429 11:55:26.939044       1 replica_set.go:557] sync "headlamp/headlamp-7559bf459f" failed with pods "headlamp-7559bf459f-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0429 11:55:27.029954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7559bf459f" duration="90.86996ms"
	I0429 11:55:27.057939       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7559bf459f" duration="27.402899ms"
	I0429 11:55:27.058508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7559bf459f" duration="317.612µs"
	I0429 11:55:27.069159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7559bf459f" duration="44.325µs"
	I0429 11:55:31.271845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="22.487503ms"
	I0429 11:55:31.272922       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="66.952µs"
	I0429 11:55:32.130174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7559bf459f" duration="47.656µs"
	I0429 11:55:32.189507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7559bf459f" duration="29.331876ms"
	I0429 11:55:32.189709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7559bf459f" duration="116.031µs"
	I0429 11:55:42.092382       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="9.705µs"
	
	
	==> kube-proxy [a30da3d9ae2cc2aaa6041219d17672d1b28325c04d1dbd194edd3aa6e655356e] <==
	I0429 11:53:59.014339       1 server_linux.go:69] "Using iptables proxy"
	I0429 11:53:59.045289       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.246"]
	I0429 11:53:59.302553       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 11:53:59.302604       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 11:53:59.302635       1 server_linux.go:165] "Using iptables Proxier"
	I0429 11:53:59.382134       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 11:53:59.382428       1 server.go:872] "Version info" version="v1.30.0"
	I0429 11:53:59.382464       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 11:53:59.393305       1 config.go:192] "Starting service config controller"
	I0429 11:53:59.393322       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 11:53:59.393347       1 config.go:101] "Starting endpoint slice config controller"
	I0429 11:53:59.393350       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 11:53:59.393701       1 config.go:319] "Starting node config controller"
	I0429 11:53:59.393707       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 11:53:59.493797       1 shared_informer.go:320] Caches are synced for node config
	I0429 11:53:59.493840       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 11:53:59.493890       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [74172e40027ff916e75d38241306657c80f53497039129a00474c4faaa8ef589] <==
	W0429 11:53:41.408822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 11:53:41.409359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 11:53:41.408863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 11:53:41.409671       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 11:53:41.409880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 11:53:41.410043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 11:53:41.410301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 11:53:41.410624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 11:53:41.411096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 11:53:41.411796       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 11:53:42.324744       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 11:53:42.324791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 11:53:42.422097       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 11:53:42.422145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 11:53:42.426329       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 11:53:42.427593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 11:53:42.486914       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 11:53:42.486960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 11:53:42.513849       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 11:53:42.514098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 11:53:42.556120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 11:53:42.557626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 11:53:42.689073       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 11:53:42.689125       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0429 11:53:45.685959       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.739940    1237 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/922173a1-b000-44d6-85a6-db7395a01086-gcp-creds\") pod \"922173a1-b000-44d6-85a6-db7395a01086\" (UID: \"922173a1-b000-44d6-85a6-db7395a01086\") "
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.739954    1237 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/922173a1-b000-44d6-85a6-db7395a01086-data\") pod \"922173a1-b000-44d6-85a6-db7395a01086\" (UID: \"922173a1-b000-44d6-85a6-db7395a01086\") "
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.739975    1237 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtvds\" (UniqueName: \"kubernetes.io/projected/922173a1-b000-44d6-85a6-db7395a01086-kube-api-access-qtvds\") pod \"922173a1-b000-44d6-85a6-db7395a01086\" (UID: \"922173a1-b000-44d6-85a6-db7395a01086\") "
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.741439    1237 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/922173a1-b000-44d6-85a6-db7395a01086-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "922173a1-b000-44d6-85a6-db7395a01086" (UID: "922173a1-b000-44d6-85a6-db7395a01086"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.741601    1237 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/922173a1-b000-44d6-85a6-db7395a01086-data" (OuterVolumeSpecName: "data") pod "922173a1-b000-44d6-85a6-db7395a01086" (UID: "922173a1-b000-44d6-85a6-db7395a01086"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.742159    1237 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/922173a1-b000-44d6-85a6-db7395a01086-script" (OuterVolumeSpecName: "script") pod "922173a1-b000-44d6-85a6-db7395a01086" (UID: "922173a1-b000-44d6-85a6-db7395a01086"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.743074    1237 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/922173a1-b000-44d6-85a6-db7395a01086-kube-api-access-qtvds" (OuterVolumeSpecName: "kube-api-access-qtvds") pod "922173a1-b000-44d6-85a6-db7395a01086" (UID: "922173a1-b000-44d6-85a6-db7395a01086"). InnerVolumeSpecName "kube-api-access-qtvds". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.815862    1237 topology_manager.go:215] "Topology Admit Handler" podUID="88b75fc7-0abc-4b4c-998a-c7b065cdb75a" podNamespace="default" podName="nginx"
	Apr 29 11:55:42 addons-399337 kubelet[1237]: E0429 11:55:42.815959    1237 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="597b35bd-0bf7-42a7-b8fe-a76ec531a9e6" containerName="registry-test"
	Apr 29 11:55:42 addons-399337 kubelet[1237]: E0429 11:55:42.815971    1237 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d26924c6-2104-408b-b28b-466d72296f07" containerName="helm-test"
	Apr 29 11:55:42 addons-399337 kubelet[1237]: E0429 11:55:42.815981    1237 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="922173a1-b000-44d6-85a6-db7395a01086" containerName="helper-pod"
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.816018    1237 memory_manager.go:354] "RemoveStaleState removing state" podUID="922173a1-b000-44d6-85a6-db7395a01086" containerName="helper-pod"
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.816025    1237 memory_manager.go:354] "RemoveStaleState removing state" podUID="597b35bd-0bf7-42a7-b8fe-a76ec531a9e6" containerName="registry-test"
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.816031    1237 memory_manager.go:354] "RemoveStaleState removing state" podUID="d26924c6-2104-408b-b28b-466d72296f07" containerName="helm-test"
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.842686    1237 reconciler_common.go:289] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/922173a1-b000-44d6-85a6-db7395a01086-script\") on node \"addons-399337\" DevicePath \"\""
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.842739    1237 reconciler_common.go:289] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/922173a1-b000-44d6-85a6-db7395a01086-gcp-creds\") on node \"addons-399337\" DevicePath \"\""
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.842748    1237 reconciler_common.go:289] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/922173a1-b000-44d6-85a6-db7395a01086-data\") on node \"addons-399337\" DevicePath \"\""
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.842761    1237 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qtvds\" (UniqueName: \"kubernetes.io/projected/922173a1-b000-44d6-85a6-db7395a01086-kube-api-access-qtvds\") on node \"addons-399337\" DevicePath \"\""
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.943546    1237 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lfqh\" (UniqueName: \"kubernetes.io/projected/8d55e4da-95cd-4672-947e-b85aad3a526e-kube-api-access-9lfqh\") pod \"8d55e4da-95cd-4672-947e-b85aad3a526e\" (UID: \"8d55e4da-95cd-4672-947e-b85aad3a526e\") "
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.943623    1237 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl8pp\" (UniqueName: \"kubernetes.io/projected/88b75fc7-0abc-4b4c-998a-c7b065cdb75a-kube-api-access-wl8pp\") pod \"nginx\" (UID: \"88b75fc7-0abc-4b4c-998a-c7b065cdb75a\") " pod="default/nginx"
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.943645    1237 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/88b75fc7-0abc-4b4c-998a-c7b065cdb75a-gcp-creds\") pod \"nginx\" (UID: \"88b75fc7-0abc-4b4c-998a-c7b065cdb75a\") " pod="default/nginx"
	Apr 29 11:55:42 addons-399337 kubelet[1237]: I0429 11:55:42.948860    1237 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d55e4da-95cd-4672-947e-b85aad3a526e-kube-api-access-9lfqh" (OuterVolumeSpecName: "kube-api-access-9lfqh") pod "8d55e4da-95cd-4672-947e-b85aad3a526e" (UID: "8d55e4da-95cd-4672-947e-b85aad3a526e"). InnerVolumeSpecName "kube-api-access-9lfqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 11:55:43 addons-399337 kubelet[1237]: I0429 11:55:43.044297    1237 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcbxb\" (UniqueName: \"kubernetes.io/projected/82994227-ad36-4723-8707-1a12d1acb7b0-kube-api-access-tcbxb\") pod \"82994227-ad36-4723-8707-1a12d1acb7b0\" (UID: \"82994227-ad36-4723-8707-1a12d1acb7b0\") "
	Apr 29 11:55:43 addons-399337 kubelet[1237]: I0429 11:55:43.044677    1237 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9lfqh\" (UniqueName: \"kubernetes.io/projected/8d55e4da-95cd-4672-947e-b85aad3a526e-kube-api-access-9lfqh\") on node \"addons-399337\" DevicePath \"\""
	Apr 29 11:55:43 addons-399337 kubelet[1237]: I0429 11:55:43.053420    1237 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82994227-ad36-4723-8707-1a12d1acb7b0-kube-api-access-tcbxb" (OuterVolumeSpecName: "kube-api-access-tcbxb") pod "82994227-ad36-4723-8707-1a12d1acb7b0" (UID: "82994227-ad36-4723-8707-1a12d1acb7b0"). InnerVolumeSpecName "kube-api-access-tcbxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	
	
	==> storage-provisioner [19bb2e708280c24f15f94ee7ba71fa0c5db7d6877a374ca9af8af1fb5fcc3fec] <==
	I0429 11:54:05.715659       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 11:54:05.799628       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 11:54:05.799704       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 11:54:05.870901       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 11:54:05.871082       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-399337_ca2b4b06-165b-453e-b128-369ddbe82d0f!
	I0429 11:54:05.886625       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9269d6c7-bb91-4e80-a726-6b48997ddd96", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-399337_ca2b4b06-165b-453e-b128-369ddbe82d0f became leader
	I0429 11:54:05.971432       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-399337_ca2b4b06-165b-453e-b128-369ddbe82d0f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-399337 -n addons-399337
helpers_test.go:261: (dbg) Run:  kubectl --context addons-399337 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx test-local-path ingress-nginx-admission-create-gm9vm ingress-nginx-admission-patch-4hjv5
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/HelmTiller]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-399337 describe pod nginx test-local-path ingress-nginx-admission-create-gm9vm ingress-nginx-admission-patch-4hjv5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-399337 describe pod nginx test-local-path ingress-nginx-admission-create-gm9vm ingress-nginx-admission-patch-4hjv5: exit status 1 (88.554665ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-399337/192.168.39.246
	Start Time:       Mon, 29 Apr 2024 11:55:42 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wl8pp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wl8pp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/nginx to addons-399337
	  Normal  Pulling    0s    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m54m2 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-m54m2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gm9vm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4hjv5" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-399337 describe pod nginx test-local-path ingress-nginx-admission-create-gm9vm ingress-nginx-admission-patch-4hjv5: exit status 1
--- FAIL: TestAddons/parallel/HelmTiller (18.73s)


Test pass (288/325)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.3
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.30.0/json-events 4.22
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0/DeleteAll 0.15
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
22 TestOffline 124.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 140.86
29 TestAddons/parallel/Registry 22.9
30 TestAddons/parallel/Ingress 20.41
31 TestAddons/parallel/InspektorGadget 11.55
32 TestAddons/parallel/MetricsServer 5.78
35 TestAddons/parallel/CSI 40.89
36 TestAddons/parallel/Headlamp 12.03
37 TestAddons/parallel/CloudSpanner 5.7
38 TestAddons/parallel/LocalPath 54.57
39 TestAddons/parallel/NvidiaDevicePlugin 6.6
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.12
44 TestAddons/StoppedEnableDisable 92.77
45 TestCertOptions 74.52
46 TestCertExpiration 281.9
48 TestForceSystemdFlag 91.3
49 TestForceSystemdEnv 53.61
51 TestKVMDriverInstallOrUpdate 1.19
55 TestErrorSpam/setup 43.41
56 TestErrorSpam/start 0.39
57 TestErrorSpam/status 0.77
58 TestErrorSpam/pause 1.57
59 TestErrorSpam/unpause 1.59
60 TestErrorSpam/stop 4.96
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 59.38
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 45.25
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.4
72 TestFunctional/serial/CacheCmd/cache/add_local 1.21
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.83
77 TestFunctional/serial/CacheCmd/cache/delete 0.13
78 TestFunctional/serial/MinikubeKubectlCmd 0.12
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
80 TestFunctional/serial/ExtraConfig 36.78
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.4
83 TestFunctional/serial/LogsFileCmd 1.46
84 TestFunctional/serial/InvalidService 3.58
86 TestFunctional/parallel/ConfigCmd 0.44
87 TestFunctional/parallel/DashboardCmd 11.8
88 TestFunctional/parallel/DryRun 0.31
89 TestFunctional/parallel/InternationalLanguage 0.16
90 TestFunctional/parallel/StatusCmd 1
94 TestFunctional/parallel/ServiceCmdConnect 10.9
95 TestFunctional/parallel/AddonsCmd 0.15
96 TestFunctional/parallel/PersistentVolumeClaim 40.18
98 TestFunctional/parallel/SSHCmd 0.5
99 TestFunctional/parallel/CpCmd 1.46
100 TestFunctional/parallel/MySQL 27.94
101 TestFunctional/parallel/FileSync 0.24
102 TestFunctional/parallel/CertSync 1.53
106 TestFunctional/parallel/NodeLabels 0.08
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
110 TestFunctional/parallel/License 0.2
111 TestFunctional/parallel/Version/short 0.3
112 TestFunctional/parallel/Version/components 0.7
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
117 TestFunctional/parallel/ImageCommands/ImageBuild 3.7
118 TestFunctional/parallel/ImageCommands/Setup 1.01
119 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.49
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.89
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.13
132 TestFunctional/parallel/ServiceCmd/List 0.34
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
135 TestFunctional/parallel/ServiceCmd/Format 0.35
136 TestFunctional/parallel/ServiceCmd/URL 0.35
137 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
141 TestFunctional/parallel/ProfileCmd/profile_list 0.32
142 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
143 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.08
144 TestFunctional/parallel/MountCmd/any-port 7.87
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.3
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.18
148 TestFunctional/parallel/MountCmd/specific-port 1.84
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.34
150 TestFunctional/delete_addon-resizer_images 0.07
151 TestFunctional/delete_my-image_image 0.02
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 192.88
157 TestMultiControlPlane/serial/DeployApp 5.01
158 TestMultiControlPlane/serial/PingHostFromPods 1.37
159 TestMultiControlPlane/serial/AddWorkerNode 45.79
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.56
162 TestMultiControlPlane/serial/CopyFile 13.87
163 TestMultiControlPlane/serial/StopSecondaryNode 92.39
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.42
165 TestMultiControlPlane/serial/RestartSecondaryNode 43.29
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.57
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 478.64
168 TestMultiControlPlane/serial/DeleteSecondaryNode 6.91
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
170 TestMultiControlPlane/serial/StopCluster 274.78
171 TestMultiControlPlane/serial/RestartCluster 117.25
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.41
173 TestMultiControlPlane/serial/AddSecondaryNode 73.88
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.58
178 TestJSONOutput/start/Command 59.02
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.75
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.66
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.34
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.22
206 TestMainNoArgs 0.06
207 TestMinikubeProfile 90.09
210 TestMountStart/serial/StartWithMountFirst 27.78
211 TestMountStart/serial/VerifyMountFirst 0.41
212 TestMountStart/serial/StartWithMountSecond 29.27
213 TestMountStart/serial/VerifyMountSecond 0.39
214 TestMountStart/serial/DeleteFirst 0.7
215 TestMountStart/serial/VerifyMountPostDelete 0.4
216 TestMountStart/serial/Stop 1.32
217 TestMountStart/serial/RestartStopped 22.46
218 TestMountStart/serial/VerifyMountPostStop 0.41
221 TestMultiNode/serial/FreshStart2Nodes 103.48
222 TestMultiNode/serial/DeployApp2Nodes 3.83
223 TestMultiNode/serial/PingHostFrom2Pods 0.87
224 TestMultiNode/serial/AddNode 35.21
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.23
227 TestMultiNode/serial/CopyFile 7.57
228 TestMultiNode/serial/StopNode 2.27
229 TestMultiNode/serial/StartAfterStop 25.42
230 TestMultiNode/serial/RestartKeepsNodes 293.74
231 TestMultiNode/serial/DeleteNode 2.38
232 TestMultiNode/serial/StopMultiNode 183.29
233 TestMultiNode/serial/RestartMultiNode 79.86
234 TestMultiNode/serial/ValidateNameConflict 44.04
239 TestPreload 228.42
241 TestScheduledStopUnix 115.2
245 TestRunningBinaryUpgrade 194.55
247 TestKubernetesUpgrade 185.1
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
251 TestNoKubernetes/serial/StartWithK8s 95.94
252 TestNoKubernetes/serial/StartWithStopK8s 46.25
253 TestStoppedBinaryUpgrade/Setup 0.45
254 TestStoppedBinaryUpgrade/Upgrade 166.1
255 TestNoKubernetes/serial/Start 36.52
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
257 TestNoKubernetes/serial/ProfileList 4.65
258 TestNoKubernetes/serial/Stop 2.37
267 TestPause/serial/Start 62.31
268 TestNoKubernetes/serial/StartNoArgs 47.78
276 TestNetworkPlugins/group/false 3.45
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
281 TestPause/serial/SecondStartNoReconfiguration 96.54
282 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
283 TestPause/serial/Pause 0.74
284 TestPause/serial/VerifyStatus 0.28
285 TestPause/serial/Unpause 0.66
286 TestPause/serial/PauseAgain 0.83
287 TestPause/serial/DeletePaused 1.2
289 TestStartStop/group/old-k8s-version/serial/FirstStart 178.66
290 TestPause/serial/VerifyDeletedResources 0.28
292 TestStartStop/group/embed-certs/serial/FirstStart 87.52
294 TestStartStop/group/no-preload/serial/FirstStart 139.32
295 TestStartStop/group/embed-certs/serial/DeployApp 8.34
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
297 TestStartStop/group/embed-certs/serial/Stop 91.85
298 TestStartStop/group/no-preload/serial/DeployApp 8.31
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 97.94
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.05
302 TestStartStop/group/no-preload/serial/Stop 92.5
303 TestStartStop/group/old-k8s-version/serial/DeployApp 7.43
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.06
305 TestStartStop/group/old-k8s-version/serial/Stop 91.77
306 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
307 TestStartStop/group/embed-certs/serial/SecondStart 302.16
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
309 TestStartStop/group/no-preload/serial/SecondStart 296.44
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.74
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
314 TestStartStop/group/old-k8s-version/serial/SecondStart 458.9
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 295.81
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
318 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
319 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
320 TestStartStop/group/embed-certs/serial/Pause 2.77
322 TestStartStop/group/newest-cni/serial/FirstStart 58.86
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
326 TestStartStop/group/newest-cni/serial/Stop 2.34
327 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
328 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
329 TestStartStop/group/newest-cni/serial/SecondStart 33.9
330 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
331 TestStartStop/group/no-preload/serial/Pause 2.88
332 TestNetworkPlugins/group/auto/Start 84.58
333 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
336 TestStartStop/group/newest-cni/serial/Pause 2.65
337 TestNetworkPlugins/group/kindnet/Start 101.13
338 TestNetworkPlugins/group/auto/KubeletFlags 0.25
339 TestNetworkPlugins/group/auto/NetCatPod 9.23
340 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
341 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
342 TestNetworkPlugins/group/auto/DNS 0.17
343 TestNetworkPlugins/group/auto/Localhost 0.14
344 TestNetworkPlugins/group/auto/HairPin 0.14
345 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
346 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.97
347 TestNetworkPlugins/group/calico/Start 91.01
348 TestNetworkPlugins/group/custom-flannel/Start 104.47
349 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
350 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
351 TestNetworkPlugins/group/kindnet/NetCatPod 10.3
352 TestNetworkPlugins/group/kindnet/DNS 0.17
353 TestNetworkPlugins/group/kindnet/Localhost 0.16
354 TestNetworkPlugins/group/kindnet/HairPin 0.15
355 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
356 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
357 TestNetworkPlugins/group/enable-default-cni/Start 72.8
358 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
359 TestStartStop/group/old-k8s-version/serial/Pause 3.23
360 TestNetworkPlugins/group/flannel/Start 96.38
361 TestNetworkPlugins/group/calico/ControllerPod 6.01
362 TestNetworkPlugins/group/calico/KubeletFlags 0.24
363 TestNetworkPlugins/group/calico/NetCatPod 10.26
364 TestNetworkPlugins/group/calico/DNS 0.17
365 TestNetworkPlugins/group/calico/Localhost 0.14
366 TestNetworkPlugins/group/calico/HairPin 0.13
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.26
369 TestNetworkPlugins/group/custom-flannel/DNS 0.28
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
372 TestNetworkPlugins/group/bridge/Start 101.2
373 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
374 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.27
375 TestNetworkPlugins/group/enable-default-cni/DNS 15.93
376 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
377 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
378 TestNetworkPlugins/group/flannel/ControllerPod 6.01
379 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
380 TestNetworkPlugins/group/flannel/NetCatPod 11.25
381 TestNetworkPlugins/group/flannel/DNS 0.19
382 TestNetworkPlugins/group/flannel/Localhost 0.14
383 TestNetworkPlugins/group/flannel/HairPin 0.14
384 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
385 TestNetworkPlugins/group/bridge/NetCatPod 9.23
386 TestNetworkPlugins/group/bridge/DNS 0.16
387 TestNetworkPlugins/group/bridge/Localhost 0.12
388 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (8.3s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-158460 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-158460 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (8.302877643s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.30s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-158460
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-158460: exit status 85 (75.203773ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-158460 | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC |          |
	|         | -p download-only-158460        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 11:52:44
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 11:52:44.420187  859893 out.go:291] Setting OutFile to fd 1 ...
	I0429 11:52:44.420440  859893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:52:44.420449  859893 out.go:304] Setting ErrFile to fd 2...
	I0429 11:52:44.420453  859893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:52:44.420639  859893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
	W0429 11:52:44.420781  859893 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18773-852552/.minikube/config/config.json: open /home/jenkins/minikube-integration/18773-852552/.minikube/config/config.json: no such file or directory
	I0429 11:52:44.421403  859893 out.go:298] Setting JSON to true
	I0429 11:52:44.422435  859893 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5712,"bootTime":1714385852,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 11:52:44.422508  859893 start.go:139] virtualization: kvm guest
	I0429 11:52:44.424912  859893 out.go:97] [download-only-158460] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 11:52:44.426256  859893 out.go:169] MINIKUBE_LOCATION=18773
	W0429 11:52:44.425021  859893 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18773-852552/.minikube/cache/preloaded-tarball: no such file or directory
	I0429 11:52:44.425060  859893 notify.go:220] Checking for updates...
	I0429 11:52:44.428854  859893 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:52:44.430091  859893 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18773-852552/kubeconfig
	I0429 11:52:44.431447  859893 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-852552/.minikube
	I0429 11:52:44.432627  859893 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0429 11:52:44.435043  859893 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 11:52:44.435314  859893 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:52:44.468169  859893 out.go:97] Using the kvm2 driver based on user configuration
	I0429 11:52:44.468210  859893 start.go:297] selected driver: kvm2
	I0429 11:52:44.468220  859893 start.go:901] validating driver "kvm2" against <nil>
	I0429 11:52:44.468586  859893 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:52:44.468666  859893 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18773-852552/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 11:52:44.485226  859893 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 11:52:44.485306  859893 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 11:52:44.485919  859893 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0429 11:52:44.486091  859893 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 11:52:44.486180  859893 cni.go:84] Creating CNI manager for ""
	I0429 11:52:44.486195  859893 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0429 11:52:44.486203  859893 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 11:52:44.486286  859893 start.go:340] cluster config:
	{Name:download-only-158460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-158460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:52:44.486503  859893 iso.go:125] acquiring lock: {Name:mk8b8ddae761cd3484839905e26ad9b8e12585e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:52:44.488671  859893 out.go:97] Downloading VM boot image ...
	I0429 11:52:44.488748  859893 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18773-852552/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 11:52:47.277094  859893 out.go:97] Starting "download-only-158460" primary control-plane node in "download-only-158460" cluster
	I0429 11:52:47.277115  859893 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0429 11:52:47.298305  859893 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0429 11:52:47.298361  859893 cache.go:56] Caching tarball of preloaded images
	I0429 11:52:47.298514  859893 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0429 11:52:47.300097  859893 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0429 11:52:47.300131  859893 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0429 11:52:47.335050  859893 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/18773-852552/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-158460 host does not exist
	  To start a cluster, run: "minikube start -p download-only-158460"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-158460
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.30.0/json-events (4.22s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-509997 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-509997 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (4.220286003s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (4.22s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-509997
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-509997: exit status 85 (73.036536ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-158460 | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC |                     |
	|         | -p download-only-158460        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC | 29 Apr 24 11:52 UTC |
	| delete  | -p download-only-158460        | download-only-158460 | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC | 29 Apr 24 11:52 UTC |
	| start   | -o=json --download-only        | download-only-509997 | jenkins | v1.33.0 | 29 Apr 24 11:52 UTC |                     |
	|         | -p download-only-509997        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 11:52:53
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 11:52:53.085253  860069 out.go:291] Setting OutFile to fd 1 ...
	I0429 11:52:53.085375  860069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:52:53.085384  860069 out.go:304] Setting ErrFile to fd 2...
	I0429 11:52:53.085388  860069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:52:53.085597  860069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
	I0429 11:52:53.086232  860069 out.go:298] Setting JSON to true
	I0429 11:52:53.087199  860069 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":5721,"bootTime":1714385852,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 11:52:53.087270  860069 start.go:139] virtualization: kvm guest
	I0429 11:52:53.089590  860069 out.go:97] [download-only-509997] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 11:52:53.091191  860069 out.go:169] MINIKUBE_LOCATION=18773
	I0429 11:52:53.089773  860069 notify.go:220] Checking for updates...
	I0429 11:52:53.093869  860069 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:52:53.095146  860069 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18773-852552/kubeconfig
	I0429 11:52:53.096301  860069 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-852552/.minikube
	I0429 11:52:53.097521  860069 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-509997 host does not exist
	  To start a cluster, run: "minikube start -p download-only-509997"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

TestDownloadOnly/v1.30.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.15s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-509997
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-540254 --alsologtostderr --binary-mirror http://127.0.0.1:38627 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-540254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-540254
--- PASS: TestBinaryMirror (0.60s)

TestOffline (124.6s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-457767 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-457767 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m3.488106791s)
helpers_test.go:175: Cleaning up "offline-containerd-457767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-457767
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-457767: (1.111769958s)
--- PASS: TestOffline (124.60s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-399337
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-399337: exit status 85 (64.948521ms)

-- stdout --
	* Profile "addons-399337" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-399337"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-399337
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-399337: exit status 85 (65.755942ms)

-- stdout --
	* Profile "addons-399337" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-399337"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (140.86s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-399337 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-399337 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m20.859874551s)
--- PASS: TestAddons/Setup (140.86s)

TestAddons/parallel/Registry (22.9s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 23.346483ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-rjcm2" [8d55e4da-95cd-4672-947e-b85aad3a526e] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.008883232s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9dsxn" [82994227-ad36-4723-8707-1a12d1acb7b0] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006862607s
addons_test.go:340: (dbg) Run:  kubectl --context addons-399337 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-399337 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-399337 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (10.943105734s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-399337 ip
2024/04/29 11:55:41 [DEBUG] GET http://192.168.39.246:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-399337 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (22.90s)

TestAddons/parallel/Ingress (20.41s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-399337 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-399337 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-399337 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [88b75fc7-0abc-4b4c-998a-c7b065cdb75a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [88b75fc7-0abc-4b4c-998a-c7b065cdb75a] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004744364s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-399337 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-399337 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-399337 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.246
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-399337 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-399337 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-399337 addons disable ingress --alsologtostderr -v=1: (8.239007858s)
--- PASS: TestAddons/parallel/Ingress (20.41s)

TestAddons/parallel/InspektorGadget (11.55s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-d4jq2" [62d12aeb-32b0-46f0-8550-0eb9a30dbb3a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.008499059s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-399337
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-399337: (6.539316063s)
--- PASS: TestAddons/parallel/InspektorGadget (11.55s)

TestAddons/parallel/MetricsServer (5.78s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 4.780853ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-bdhjz" [6989fee0-f9b4-4dad-afaf-2a05bb7773b0] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009160984s
addons_test.go:415: (dbg) Run:  kubectl --context addons-399337 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-399337 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.78s)

TestAddons/parallel/CSI (40.89s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 5.237119ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-399337 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-399337 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [608604a2-713d-4827-9e6a-0acfd778abcb] Pending
helpers_test.go:344: "task-pv-pod" [608604a2-713d-4827-9e6a-0acfd778abcb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [608604a2-713d-4827-9e6a-0acfd778abcb] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004781536s
addons_test.go:584: (dbg) Run:  kubectl --context addons-399337 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-399337 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-399337 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-399337 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-399337 delete pod task-pv-pod: (1.479617536s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-399337 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-399337 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-399337 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [580c3d4e-5992-4739-8bc0-ee7cecc15aad] Pending
helpers_test.go:344: "task-pv-pod-restore" [580c3d4e-5992-4739-8bc0-ee7cecc15aad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [580c3d4e-5992-4739-8bc0-ee7cecc15aad] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004731492s
addons_test.go:626: (dbg) Run:  kubectl --context addons-399337 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-399337 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-399337 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-399337 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-399337 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.712162176s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-399337 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.89s)

TestAddons/parallel/Headlamp (12.03s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-399337 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-399337 --alsologtostderr -v=1: (1.024646667s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-cqfvg" [18d07aa5-a206-4fb8-8bd2-ac06c1259418] Pending
helpers_test.go:344: "headlamp-7559bf459f-cqfvg" [18d07aa5-a206-4fb8-8bd2-ac06c1259418] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-cqfvg" [18d07aa5-a206-4fb8-8bd2-ac06c1259418] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00454218s
--- PASS: TestAddons/parallel/Headlamp (12.03s)

TestAddons/parallel/CloudSpanner (5.7s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dc8d859f6-lcc44" [85feb7f2-9509-477f-81d7-cb4ad647bf72] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005654983s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-399337
--- PASS: TestAddons/parallel/CloudSpanner (5.70s)

TestAddons/parallel/LocalPath (54.57s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-399337 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-399337 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-399337 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d1c2f3d9-194a-4c04-b97b-05ef6e14d93c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d1c2f3d9-194a-4c04-b97b-05ef6e14d93c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d1c2f3d9-194a-4c04-b97b-05ef6e14d93c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002838078s
addons_test.go:891: (dbg) Run:  kubectl --context addons-399337 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-399337 ssh "cat /opt/local-path-provisioner/pvc-0c0e318d-4448-4498-abd7-025eb4f88ed0_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-399337 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-399337 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-399337 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-399337 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.699366093s)
--- PASS: TestAddons/parallel/LocalPath (54.57s)

TestAddons/parallel/NvidiaDevicePlugin (6.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xkmsp" [30bb4e14-df21-4bc2-801b-bd1f4be76ca7] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006633852s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-399337
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.60s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-wt82q" [5d943255-ff1b-4a5f-b807-e3460709fc04] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00473966s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-399337 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-399337 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (92.77s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-399337
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-399337: (1m32.449230174s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-399337
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-399337
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-399337
--- PASS: TestAddons/StoppedEnableDisable (92.77s)

TestCertOptions (74.52s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-151348 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-151348 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m13.22246448s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-151348 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-151348 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-151348 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-151348" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-151348
--- PASS: TestCertOptions (74.52s)

TestCertExpiration (281.9s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-288715 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-288715 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m26.117011966s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-288715 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-288715 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (14.759361099s)
helpers_test.go:175: Cleaning up "cert-expiration-288715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-288715
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-288715: (1.024811447s)
--- PASS: TestCertExpiration (281.90s)

TestForceSystemdFlag (91.3s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-861606 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0429 12:51:50.831325  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-861606 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m30.251731019s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-861606 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-861606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-861606
--- PASS: TestForceSystemdFlag (91.30s)

TestForceSystemdEnv (53.61s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-128049 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-128049 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (52.565508901s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-128049 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-128049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-128049
--- PASS: TestForceSystemdEnv (53.61s)

TestKVMDriverInstallOrUpdate (1.19s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.19s)

TestErrorSpam/setup (43.41s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-809104 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-809104 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-809104 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-809104 --driver=kvm2  --container-runtime=containerd: (43.413598838s)
--- PASS: TestErrorSpam/setup (43.41s)

TestErrorSpam/start (0.39s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.77s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.57s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 pause
--- PASS: TestErrorSpam/pause (1.57s)

TestErrorSpam/unpause (1.59s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 unpause
--- PASS: TestErrorSpam/unpause (1.59s)

TestErrorSpam/stop (4.96s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 stop: (1.492561659s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 stop: (2.063889781s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-809104 --log_dir /tmp/nospam-809104 stop: (1.407926819s)
--- PASS: TestErrorSpam/stop (4.96s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18773-852552/.minikube/files/etc/test/nested/copy/859881/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (59.38s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-765881 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-765881 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (59.381186169s)
--- PASS: TestFunctional/serial/StartWithProxy (59.38s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (45.25s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-765881 --alsologtostderr -v=8
E0429 12:00:19.410817  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:00:19.416887  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:00:19.427161  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:00:19.447292  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:00:19.487666  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:00:19.568380  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:00:19.728861  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:00:20.049551  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:00:20.690566  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:00:21.971019  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:00:24.531990  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:00:29.653224  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:00:39.893555  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-765881 --alsologtostderr -v=8: (45.247010522s)
functional_test.go:659: soft start took 45.247808525s for "functional-765881" cluster.
--- PASS: TestFunctional/serial/SoftStart (45.25s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-765881 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.4s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 cache add registry.k8s.io/pause:3.1
E0429 12:01:00.373801  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-765881 cache add registry.k8s.io/pause:3.1: (1.100845546s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-765881 cache add registry.k8s.io/pause:3.3: (1.17563178s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-765881 cache add registry.k8s.io/pause:latest: (1.120196625s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.40s)

TestFunctional/serial/CacheCmd/cache/add_local (1.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-765881 /tmp/TestFunctionalserialCacheCmdcacheadd_local1691369217/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 cache add minikube-local-cache-test:functional-765881
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 cache delete minikube-local-cache-test:functional-765881
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-765881
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-765881 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (224.467925ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-765881 cache reload: (1.113219377s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 kubectl -- --context functional-765881 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-765881 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (36.78s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-765881 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0429 12:01:41.334082  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-765881 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.781629182s)
functional_test.go:757: restart took 36.781758388s for "functional-765881" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.78s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-765881 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.4s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-765881 logs: (1.39558379s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

TestFunctional/serial/LogsFileCmd (1.46s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 logs --file /tmp/TestFunctionalserialLogsFileCmd1087906156/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-765881 logs --file /tmp/TestFunctionalserialLogsFileCmd1087906156/001/logs.txt: (1.460849595s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

TestFunctional/serial/InvalidService (3.58s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-765881 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-765881
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-765881: exit status 115 (291.097152ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.49:32149 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-765881 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.58s)

TestFunctional/parallel/ConfigCmd (0.44s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-765881 config get cpus: exit status 14 (65.789331ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-765881 config get cpus: exit status 14 (62.679615ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (11.8s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-765881 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-765881 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 866791: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.80s)

TestFunctional/parallel/DryRun (0.31s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-765881 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-765881 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (161.318287ms)
-- stdout --
	* [functional-765881] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18773-852552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-852552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0429 12:02:05.332524  866480 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:02:05.332858  866480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:02:05.332874  866480 out.go:304] Setting ErrFile to fd 2...
	I0429 12:02:05.332881  866480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:02:05.333207  866480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
	I0429 12:02:05.334061  866480 out.go:298] Setting JSON to false
	I0429 12:02:05.335580  866480 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6273,"bootTime":1714385852,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 12:02:05.335695  866480 start.go:139] virtualization: kvm guest
	I0429 12:02:05.338272  866480 out.go:177] * [functional-765881] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 12:02:05.339690  866480 notify.go:220] Checking for updates...
	I0429 12:02:05.341028  866480 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 12:02:05.342606  866480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:02:05.344076  866480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-852552/kubeconfig
	I0429 12:02:05.345278  866480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-852552/.minikube
	I0429 12:02:05.346521  866480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 12:02:05.347863  866480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:02:05.349711  866480 config.go:182] Loaded profile config "functional-765881": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 12:02:05.350184  866480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:02:05.350223  866480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:02:05.366166  866480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43129
	I0429 12:02:05.366712  866480 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:02:05.367317  866480 main.go:141] libmachine: Using API Version  1
	I0429 12:02:05.367334  866480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:02:05.367728  866480 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:02:05.367944  866480 main.go:141] libmachine: (functional-765881) Calling .DriverName
	I0429 12:02:05.368193  866480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:02:05.368481  866480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:02:05.368540  866480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:02:05.384435  866480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0429 12:02:05.384935  866480 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:02:05.385393  866480 main.go:141] libmachine: Using API Version  1
	I0429 12:02:05.385416  866480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:02:05.385746  866480 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:02:05.385965  866480 main.go:141] libmachine: (functional-765881) Calling .DriverName
	I0429 12:02:05.419017  866480 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 12:02:05.420446  866480 start.go:297] selected driver: kvm2
	I0429 12:02:05.420463  866480 start.go:901] validating driver "kvm2" against &{Name:functional-765881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-765881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:02:05.420573  866480 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:02:05.423006  866480 out.go:177] 
	W0429 12:02:05.424372  866480 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0429 12:02:05.425455  866480 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-765881 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.31s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-765881 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-765881 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (155.464188ms)

-- stdout --
	* [functional-765881] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18773-852552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-852552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0429 12:02:05.646014  866536 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:02:05.646148  866536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:02:05.646161  866536 out.go:304] Setting ErrFile to fd 2...
	I0429 12:02:05.646168  866536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:02:05.646462  866536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
	I0429 12:02:05.647028  866536 out.go:298] Setting JSON to false
	I0429 12:02:05.648185  866536 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6274,"bootTime":1714385852,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 12:02:05.648252  866536 start.go:139] virtualization: kvm guest
	I0429 12:02:05.650520  866536 out.go:177] * [functional-765881] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	I0429 12:02:05.651771  866536 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 12:02:05.651859  866536 notify.go:220] Checking for updates...
	I0429 12:02:05.654153  866536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:02:05.655398  866536 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-852552/kubeconfig
	I0429 12:02:05.656655  866536 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-852552/.minikube
	I0429 12:02:05.657965  866536 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 12:02:05.659327  866536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:02:05.660949  866536 config.go:182] Loaded profile config "functional-765881": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 12:02:05.661372  866536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:02:05.661420  866536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:02:05.676794  866536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34039
	I0429 12:02:05.677277  866536 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:02:05.677959  866536 main.go:141] libmachine: Using API Version  1
	I0429 12:02:05.677986  866536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:02:05.678401  866536 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:02:05.678627  866536 main.go:141] libmachine: (functional-765881) Calling .DriverName
	I0429 12:02:05.678957  866536 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:02:05.679284  866536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:02:05.679332  866536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:02:05.694316  866536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
	I0429 12:02:05.694685  866536 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:02:05.695170  866536 main.go:141] libmachine: Using API Version  1
	I0429 12:02:05.695190  866536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:02:05.695515  866536 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:02:05.695718  866536 main.go:141] libmachine: (functional-765881) Calling .DriverName
	I0429 12:02:05.727326  866536 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0429 12:02:05.728445  866536 start.go:297] selected driver: kvm2
	I0429 12:02:05.728461  866536 start.go:901] validating driver "kvm2" against &{Name:functional-765881 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-765881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:02:05.728619  866536 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:02:05.730629  866536 out.go:177] 
	W0429 12:02:05.731964  866536 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0429 12:02:05.733200  866536 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)

TestFunctional/parallel/ServiceCmdConnect (10.9s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-765881 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-765881 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-499w2" [a8859e55-9ad6-4c84-8cf3-fc41a94dc9dc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-499w2" [a8859e55-9ad6-4c84-8cf3-fc41a94dc9dc] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004208378s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.49:31560
functional_test.go:1671: http://192.168.39.49:31560: success! body:

Hostname: hello-node-connect-57b4589c47-499w2

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.49:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.49:31560
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.90s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (40.18s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c4dee513-7729-494f-84b5-e40deb15e4dd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005689343s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-765881 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-765881 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-765881 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-765881 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-765881 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8c995169-c9e4-4222-b075-0577ad98bac4] Pending
helpers_test.go:344: "sp-pod" [8c995169-c9e4-4222-b075-0577ad98bac4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8c995169-c9e4-4222-b075-0577ad98bac4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004622879s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-765881 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-765881 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-765881 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7bac722b-dbf5-4900-81d0-3708903ffd0a] Pending
helpers_test.go:344: "sp-pod" [7bac722b-dbf5-4900-81d0-3708903ffd0a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7bac722b-dbf5-4900-81d0-3708903ffd0a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.005176715s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-765881 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.18s)

TestFunctional/parallel/SSHCmd (0.5s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

TestFunctional/parallel/CpCmd (1.46s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh -n functional-765881 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 cp functional-765881:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1964945615/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh -n functional-765881 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh -n functional-765881 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.46s)

TestFunctional/parallel/MySQL (27.94s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-765881 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-67jv7" [12f26df1-1134-46c4-be89-f30cd2026829] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-67jv7" [12f26df1-1134-46c4-be89-f30cd2026829] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.006736597s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-765881 exec mysql-64454c8b5c-67jv7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-765881 exec mysql-64454c8b5c-67jv7 -- mysql -ppassword -e "show databases;": exit status 1 (144.241723ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-765881 exec mysql-64454c8b5c-67jv7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-765881 exec mysql-64454c8b5c-67jv7 -- mysql -ppassword -e "show databases;": exit status 1 (130.293705ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-765881 exec mysql-64454c8b5c-67jv7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-765881 exec mysql-64454c8b5c-67jv7 -- mysql -ppassword -e "show databases;": exit status 1 (125.799049ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-765881 exec mysql-64454c8b5c-67jv7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.94s)

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/859881/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "sudo cat /etc/test/nested/copy/859881/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.53s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/859881.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "sudo cat /etc/ssl/certs/859881.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/859881.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "sudo cat /usr/share/ca-certificates/859881.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/8598812.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "sudo cat /etc/ssl/certs/8598812.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/8598812.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "sudo cat /usr/share/ca-certificates/8598812.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.53s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-765881 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-765881 ssh "sudo systemctl is-active docker": exit status 1 (252.874262ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-765881 ssh "sudo systemctl is-active crio": exit status 1 (239.811485ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/Version/short (0.3s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 version --short
--- PASS: TestFunctional/parallel/Version/short (0.30s)

TestFunctional/parallel/Version/components (0.7s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-765881 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-765881
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-765881
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-765881 image ls --format short --alsologtostderr:
I0429 12:02:17.567638  867504 out.go:291] Setting OutFile to fd 1 ...
I0429 12:02:17.567911  867504 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:02:17.567922  867504 out.go:304] Setting ErrFile to fd 2...
I0429 12:02:17.567927  867504 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:02:17.568129  867504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
I0429 12:02:17.568770  867504 config.go:182] Loaded profile config "functional-765881": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:02:17.568870  867504 config.go:182] Loaded profile config "functional-765881": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:02:17.569399  867504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:02:17.569447  867504 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:02:17.585983  867504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
I0429 12:02:17.586794  867504 main.go:141] libmachine: () Calling .GetVersion
I0429 12:02:17.588975  867504 main.go:141] libmachine: Using API Version  1
I0429 12:02:17.589050  867504 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:02:17.589571  867504 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:02:17.589812  867504 main.go:141] libmachine: (functional-765881) Calling .GetState
I0429 12:02:17.591789  867504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:02:17.591861  867504 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:02:17.607124  867504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
I0429 12:02:17.607737  867504 main.go:141] libmachine: () Calling .GetVersion
I0429 12:02:17.608245  867504 main.go:141] libmachine: Using API Version  1
I0429 12:02:17.608271  867504 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:02:17.608583  867504 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:02:17.608763  867504 main.go:141] libmachine: (functional-765881) Calling .DriverName
I0429 12:02:17.609089  867504 ssh_runner.go:195] Run: systemctl --version
I0429 12:02:17.609118  867504 main.go:141] libmachine: (functional-765881) Calling .GetSSHHostname
I0429 12:02:17.611627  867504 main.go:141] libmachine: (functional-765881) DBG | domain functional-765881 has defined MAC address 52:54:00:7f:07:35 in network mk-functional-765881
I0429 12:02:17.612033  867504 main.go:141] libmachine: (functional-765881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:07:35", ip: ""} in network mk-functional-765881: {Iface:virbr1 ExpiryTime:2024-04-29 12:59:29 +0000 UTC Type:0 Mac:52:54:00:7f:07:35 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-765881 Clientid:01:52:54:00:7f:07:35}
I0429 12:02:17.612061  867504 main.go:141] libmachine: (functional-765881) DBG | domain functional-765881 has defined IP address 192.168.39.49 and MAC address 52:54:00:7f:07:35 in network mk-functional-765881
I0429 12:02:17.612215  867504 main.go:141] libmachine: (functional-765881) Calling .GetSSHPort
I0429 12:02:17.612383  867504 main.go:141] libmachine: (functional-765881) Calling .GetSSHKeyPath
I0429 12:02:17.612514  867504 main.go:141] libmachine: (functional-765881) Calling .GetSSHUsername
I0429 12:02:17.612647  867504 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/functional-765881/id_rsa Username:docker}
I0429 12:02:17.697389  867504 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 12:02:17.759311  867504 main.go:141] libmachine: Making call to close driver server
I0429 12:02:17.759333  867504 main.go:141] libmachine: (functional-765881) Calling .Close
I0429 12:02:17.759638  867504 main.go:141] libmachine: (functional-765881) DBG | Closing plugin on server side
I0429 12:02:17.759706  867504 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:02:17.759719  867504 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:02:17.759732  867504 main.go:141] libmachine: Making call to close driver server
I0429 12:02:17.759742  867504 main.go:141] libmachine: (functional-765881) Calling .Close
I0429 12:02:17.760029  867504 main.go:141] libmachine: (functional-765881) DBG | Closing plugin on server side
I0429 12:02:17.760033  867504 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:02:17.760058  867504 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-765881 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | latest             | sha256:7383c2 | 71MB   |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-proxy                  | v1.30.0            | sha256:a0bf55 | 29MB   |
| gcr.io/google-containers/addon-resizer      | functional-765881  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/kube-scheduler              | v1.30.0            | sha256:259c82 | 19.2MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4950bb | 27.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/library/minikube-local-cache-test | functional-765881  | sha256:93e5d0 | 991B   |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| registry.k8s.io/kube-apiserver              | v1.30.0            | sha256:c42f13 | 32.7MB |
| registry.k8s.io/kube-controller-manager     | v1.30.0            | sha256:c7aad4 | 31MB   |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-765881 image ls --format table --alsologtostderr:
I0429 12:02:18.129374  867627 out.go:291] Setting OutFile to fd 1 ...
I0429 12:02:18.129672  867627 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:02:18.129684  867627 out.go:304] Setting ErrFile to fd 2...
I0429 12:02:18.129690  867627 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:02:18.129882  867627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
I0429 12:02:18.130487  867627 config.go:182] Loaded profile config "functional-765881": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:02:18.130605  867627 config.go:182] Loaded profile config "functional-765881": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:02:18.131013  867627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:02:18.131066  867627 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:02:18.146075  867627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37211
I0429 12:02:18.146610  867627 main.go:141] libmachine: () Calling .GetVersion
I0429 12:02:18.147196  867627 main.go:141] libmachine: Using API Version  1
I0429 12:02:18.147219  867627 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:02:18.147570  867627 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:02:18.147798  867627 main.go:141] libmachine: (functional-765881) Calling .GetState
I0429 12:02:18.149497  867627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:02:18.149541  867627 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:02:18.164967  867627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
I0429 12:02:18.165493  867627 main.go:141] libmachine: () Calling .GetVersion
I0429 12:02:18.166187  867627 main.go:141] libmachine: Using API Version  1
I0429 12:02:18.166215  867627 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:02:18.166540  867627 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:02:18.166758  867627 main.go:141] libmachine: (functional-765881) Calling .DriverName
I0429 12:02:18.166943  867627 ssh_runner.go:195] Run: systemctl --version
I0429 12:02:18.166968  867627 main.go:141] libmachine: (functional-765881) Calling .GetSSHHostname
I0429 12:02:18.170112  867627 main.go:141] libmachine: (functional-765881) DBG | domain functional-765881 has defined MAC address 52:54:00:7f:07:35 in network mk-functional-765881
I0429 12:02:18.170580  867627 main.go:141] libmachine: (functional-765881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:07:35", ip: ""} in network mk-functional-765881: {Iface:virbr1 ExpiryTime:2024-04-29 12:59:29 +0000 UTC Type:0 Mac:52:54:00:7f:07:35 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-765881 Clientid:01:52:54:00:7f:07:35}
I0429 12:02:18.170615  867627 main.go:141] libmachine: (functional-765881) DBG | domain functional-765881 has defined IP address 192.168.39.49 and MAC address 52:54:00:7f:07:35 in network mk-functional-765881
I0429 12:02:18.170793  867627 main.go:141] libmachine: (functional-765881) Calling .GetSSHPort
I0429 12:02:18.171050  867627 main.go:141] libmachine: (functional-765881) Calling .GetSSHKeyPath
I0429 12:02:18.171258  867627 main.go:141] libmachine: (functional-765881) Calling .GetSSHUsername
I0429 12:02:18.171442  867627 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/functional-765881/id_rsa Username:docker}
I0429 12:02:18.295405  867627 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 12:02:18.353668  867627 main.go:141] libmachine: Making call to close driver server
I0429 12:02:18.353690  867627 main.go:141] libmachine: (functional-765881) Calling .Close
I0429 12:02:18.354072  867627 main.go:141] libmachine: (functional-765881) DBG | Closing plugin on server side
I0429 12:02:18.354092  867627 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:02:18.354108  867627 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:02:18.354118  867627 main.go:141] libmachine: Making call to close driver server
I0429 12:02:18.354135  867627 main.go:141] libmachine: (functional-765881) Calling .Close
I0429 12:02:18.354417  867627 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:02:18.354443  867627 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:02:18.354465  867627 main.go:141] libmachine: (functional-765881) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-765881 image ls --format json --alsologtostderr:
[{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"27755257"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"32663599"},{"id":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"19208660"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":["docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"70991807"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-765881"],"size":"10823156"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"31030110"},{"id":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":["registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"29020473"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:93e5d00c44fdbc448fd3b6689242081dc8c3313056dc22b8b33ffdd0164a798c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-765881"],"size":"991"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-765881 image ls --format json --alsologtostderr:
I0429 12:02:17.885341  867563 out.go:291] Setting OutFile to fd 1 ...
I0429 12:02:17.885514  867563 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:02:17.885521  867563 out.go:304] Setting ErrFile to fd 2...
I0429 12:02:17.885526  867563 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:02:17.885730  867563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
I0429 12:02:17.886358  867563 config.go:182] Loaded profile config "functional-765881": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:02:17.886502  867563 config.go:182] Loaded profile config "functional-765881": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:02:17.886945  867563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:02:17.887004  867563 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:02:17.902878  867563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43127
I0429 12:02:17.903388  867563 main.go:141] libmachine: () Calling .GetVersion
I0429 12:02:17.904025  867563 main.go:141] libmachine: Using API Version  1
I0429 12:02:17.904052  867563 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:02:17.904388  867563 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:02:17.904611  867563 main.go:141] libmachine: (functional-765881) Calling .GetState
I0429 12:02:17.906393  867563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:02:17.906435  867563 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:02:17.921211  867563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
I0429 12:02:17.921593  867563 main.go:141] libmachine: () Calling .GetVersion
I0429 12:02:17.922101  867563 main.go:141] libmachine: Using API Version  1
I0429 12:02:17.922125  867563 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:02:17.922459  867563 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:02:17.922636  867563 main.go:141] libmachine: (functional-765881) Calling .DriverName
I0429 12:02:17.922835  867563 ssh_runner.go:195] Run: systemctl --version
I0429 12:02:17.922861  867563 main.go:141] libmachine: (functional-765881) Calling .GetSSHHostname
I0429 12:02:17.925293  867563 main.go:141] libmachine: (functional-765881) DBG | domain functional-765881 has defined MAC address 52:54:00:7f:07:35 in network mk-functional-765881
I0429 12:02:17.925698  867563 main.go:141] libmachine: (functional-765881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:07:35", ip: ""} in network mk-functional-765881: {Iface:virbr1 ExpiryTime:2024-04-29 12:59:29 +0000 UTC Type:0 Mac:52:54:00:7f:07:35 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-765881 Clientid:01:52:54:00:7f:07:35}
I0429 12:02:17.925729  867563 main.go:141] libmachine: (functional-765881) DBG | domain functional-765881 has defined IP address 192.168.39.49 and MAC address 52:54:00:7f:07:35 in network mk-functional-765881
I0429 12:02:17.925864  867563 main.go:141] libmachine: (functional-765881) Calling .GetSSHPort
I0429 12:02:17.926024  867563 main.go:141] libmachine: (functional-765881) Calling .GetSSHKeyPath
I0429 12:02:17.926159  867563 main.go:141] libmachine: (functional-765881) Calling .GetSSHUsername
I0429 12:02:17.926283  867563 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/functional-765881/id_rsa Username:docker}
I0429 12:02:18.005535  867563 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 12:02:18.064332  867563 main.go:141] libmachine: Making call to close driver server
I0429 12:02:18.064351  867563 main.go:141] libmachine: (functional-765881) Calling .Close
I0429 12:02:18.064641  867563 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:02:18.064668  867563 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:02:18.064677  867563 main.go:141] libmachine: Making call to close driver server
I0429 12:02:18.064685  867563 main.go:141] libmachine: (functional-765881) Calling .Close
I0429 12:02:18.064973  867563 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:02:18.065008  867563 main.go:141] libmachine: (functional-765881) DBG | Closing plugin on server side
I0429 12:02:18.065011  867563 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-765881 image ls --format yaml --alsologtostderr:
- id: sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "27755257"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "32663599"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:93e5d00c44fdbc448fd3b6689242081dc8c3313056dc22b8b33ffdd0164a798c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-765881
size: "991"
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "19208660"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "31030110"
- id: sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests:
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "29020473"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests:
- docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee
repoTags:
- docker.io/library/nginx:latest
size: "70991807"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-765881
size: "10823156"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-765881 image ls --format yaml --alsologtostderr:
I0429 12:02:17.602332  867516 out.go:291] Setting OutFile to fd 1 ...
I0429 12:02:17.602458  867516 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:02:17.602469  867516 out.go:304] Setting ErrFile to fd 2...
I0429 12:02:17.602473  867516 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:02:17.602698  867516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
I0429 12:02:17.603301  867516 config.go:182] Loaded profile config "functional-765881": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:02:17.603405  867516 config.go:182] Loaded profile config "functional-765881": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:02:17.603763  867516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:02:17.603806  867516 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:02:17.621582  867516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35529
I0429 12:02:17.622063  867516 main.go:141] libmachine: () Calling .GetVersion
I0429 12:02:17.622756  867516 main.go:141] libmachine: Using API Version  1
I0429 12:02:17.622792  867516 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:02:17.623209  867516 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:02:17.623431  867516 main.go:141] libmachine: (functional-765881) Calling .GetState
I0429 12:02:17.625322  867516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:02:17.625368  867516 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:02:17.640995  867516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46167
I0429 12:02:17.641452  867516 main.go:141] libmachine: () Calling .GetVersion
I0429 12:02:17.641943  867516 main.go:141] libmachine: Using API Version  1
I0429 12:02:17.641964  867516 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:02:17.642311  867516 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:02:17.642518  867516 main.go:141] libmachine: (functional-765881) Calling .DriverName
I0429 12:02:17.642775  867516 ssh_runner.go:195] Run: systemctl --version
I0429 12:02:17.642804  867516 main.go:141] libmachine: (functional-765881) Calling .GetSSHHostname
I0429 12:02:17.645307  867516 main.go:141] libmachine: (functional-765881) DBG | domain functional-765881 has defined MAC address 52:54:00:7f:07:35 in network mk-functional-765881
I0429 12:02:17.645761  867516 main.go:141] libmachine: (functional-765881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:07:35", ip: ""} in network mk-functional-765881: {Iface:virbr1 ExpiryTime:2024-04-29 12:59:29 +0000 UTC Type:0 Mac:52:54:00:7f:07:35 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-765881 Clientid:01:52:54:00:7f:07:35}
I0429 12:02:17.645795  867516 main.go:141] libmachine: (functional-765881) DBG | domain functional-765881 has defined IP address 192.168.39.49 and MAC address 52:54:00:7f:07:35 in network mk-functional-765881
I0429 12:02:17.646051  867516 main.go:141] libmachine: (functional-765881) Calling .GetSSHPort
I0429 12:02:17.646202  867516 main.go:141] libmachine: (functional-765881) Calling .GetSSHKeyPath
I0429 12:02:17.646368  867516 main.go:141] libmachine: (functional-765881) Calling .GetSSHUsername
I0429 12:02:17.646517  867516 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/functional-765881/id_rsa Username:docker}
I0429 12:02:17.748857  867516 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 12:02:17.817684  867516 main.go:141] libmachine: Making call to close driver server
I0429 12:02:17.817699  867516 main.go:141] libmachine: (functional-765881) Calling .Close
I0429 12:02:17.817973  867516 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:02:17.818002  867516 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:02:17.818027  867516 main.go:141] libmachine: Making call to close driver server
I0429 12:02:17.818039  867516 main.go:141] libmachine: (functional-765881) Calling .Close
I0429 12:02:17.818043  867516 main.go:141] libmachine: (functional-765881) DBG | Closing plugin on server side
I0429 12:02:17.818261  867516 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:02:17.818276  867516 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:02:17.818297  867516 main.go:141] libmachine: (functional-765881) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
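As an aside, the stdout above is plain YAML and easy to post-process. A minimal sketch (a hypothetical helper, not part of the minikube test suite) that totals the reported image sizes without pulling in a YAML dependency, using a two-entry excerpt of the listing above:

```python
# Hypothetical helper, not part of minikube: sum the `size:` fields from
# `minikube image ls --format yaml` output like the stdout above.
# Each image contributes exactly one top-level 'size: "<bytes>"' line.
SAMPLE = '''\
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
'''

def total_image_bytes(listing: str) -> int:
    # The listing is flat enough to scan line-by-line instead of parsing YAML.
    total = 0
    for line in listing.splitlines():
        if line.startswith('size:'):
            total += int(line.split(':', 1)[1].strip().strip('"'))
    return total

print(total_image_bytes(SAMPLE))  # 297686 + 72306 = 369992
```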

TestFunctional/parallel/ImageCommands/ImageBuild (3.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-765881 ssh pgrep buildkitd: exit status 1 (208.164528ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image build -t localhost/my-image:functional-765881 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-765881 image build -t localhost/my-image:functional-765881 testdata/build --alsologtostderr: (3.236681016s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-765881 image build -t localhost/my-image:functional-765881 testdata/build --alsologtostderr:
I0429 12:02:18.034616  867603 out.go:291] Setting OutFile to fd 1 ...
I0429 12:02:18.034763  867603 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:02:18.034773  867603 out.go:304] Setting ErrFile to fd 2...
I0429 12:02:18.034778  867603 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:02:18.035024  867603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
I0429 12:02:18.035646  867603 config.go:182] Loaded profile config "functional-765881": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:02:18.036487  867603 config.go:182] Loaded profile config "functional-765881": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0429 12:02:18.037128  867603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:02:18.037190  867603 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:02:18.053031  867603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
I0429 12:02:18.053578  867603 main.go:141] libmachine: () Calling .GetVersion
I0429 12:02:18.054284  867603 main.go:141] libmachine: Using API Version  1
I0429 12:02:18.054310  867603 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:02:18.054734  867603 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:02:18.054965  867603 main.go:141] libmachine: (functional-765881) Calling .GetState
I0429 12:02:18.057228  867603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0429 12:02:18.057273  867603 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 12:02:18.073689  867603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38957
I0429 12:02:18.074223  867603 main.go:141] libmachine: () Calling .GetVersion
I0429 12:02:18.074858  867603 main.go:141] libmachine: Using API Version  1
I0429 12:02:18.074894  867603 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:02:18.075295  867603 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:02:18.075498  867603 main.go:141] libmachine: (functional-765881) Calling .DriverName
I0429 12:02:18.075775  867603 ssh_runner.go:195] Run: systemctl --version
I0429 12:02:18.075806  867603 main.go:141] libmachine: (functional-765881) Calling .GetSSHHostname
I0429 12:02:18.079021  867603 main.go:141] libmachine: (functional-765881) DBG | domain functional-765881 has defined MAC address 52:54:00:7f:07:35 in network mk-functional-765881
I0429 12:02:18.079467  867603 main.go:141] libmachine: (functional-765881) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:07:35", ip: ""} in network mk-functional-765881: {Iface:virbr1 ExpiryTime:2024-04-29 12:59:29 +0000 UTC Type:0 Mac:52:54:00:7f:07:35 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-765881 Clientid:01:52:54:00:7f:07:35}
I0429 12:02:18.079505  867603 main.go:141] libmachine: (functional-765881) DBG | domain functional-765881 has defined IP address 192.168.39.49 and MAC address 52:54:00:7f:07:35 in network mk-functional-765881
I0429 12:02:18.079624  867603 main.go:141] libmachine: (functional-765881) Calling .GetSSHPort
I0429 12:02:18.079827  867603 main.go:141] libmachine: (functional-765881) Calling .GetSSHKeyPath
I0429 12:02:18.079990  867603 main.go:141] libmachine: (functional-765881) Calling .GetSSHUsername
I0429 12:02:18.080177  867603 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/functional-765881/id_rsa Username:docker}
I0429 12:02:18.162727  867603 build_images.go:161] Building image from path: /tmp/build.561570889.tar
I0429 12:02:18.162802  867603 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0429 12:02:18.178452  867603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.561570889.tar
I0429 12:02:18.187364  867603 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.561570889.tar: stat -c "%s %y" /var/lib/minikube/build/build.561570889.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.561570889.tar': No such file or directory
I0429 12:02:18.187402  867603 ssh_runner.go:362] scp /tmp/build.561570889.tar --> /var/lib/minikube/build/build.561570889.tar (3072 bytes)
I0429 12:02:18.228822  867603 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.561570889
I0429 12:02:18.242867  867603 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.561570889 -xf /var/lib/minikube/build/build.561570889.tar
I0429 12:02:18.259804  867603 containerd.go:394] Building image: /var/lib/minikube/build/build.561570889
I0429 12:02:18.259892  867603 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.561570889 --local dockerfile=/var/lib/minikube/build/build.561570889 --output type=image,name=localhost/my-image:functional-765881
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 1.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:08f87e5c87b6d0038fbb110d82e21bdd9e405647e4de1d7c11461d138e61ba72
#8 exporting manifest sha256:08f87e5c87b6d0038fbb110d82e21bdd9e405647e4de1d7c11461d138e61ba72 0.0s done
#8 exporting config sha256:9f0f9572731ccfe6e98ea0ae375ad4c571e5017fa6c7ab38fa721136d3d0f552 0.0s done
#8 naming to localhost/my-image:functional-765881 done
#8 DONE 0.2s
I0429 12:02:21.173194  867603 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.561570889 --local dockerfile=/var/lib/minikube/build/build.561570889 --output type=image,name=localhost/my-image:functional-765881: (2.913263107s)
I0429 12:02:21.173290  867603 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.561570889
I0429 12:02:21.190672  867603 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.561570889.tar
I0429 12:02:21.206287  867603 build_images.go:217] Built localhost/my-image:functional-765881 from /tmp/build.561570889.tar
I0429 12:02:21.206328  867603 build_images.go:133] succeeded building to: functional-765881
I0429 12:02:21.206335  867603 build_images.go:134] failed building to: 
I0429 12:02:21.206367  867603 main.go:141] libmachine: Making call to close driver server
I0429 12:02:21.206384  867603 main.go:141] libmachine: (functional-765881) Calling .Close
I0429 12:02:21.206824  867603 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:02:21.206847  867603 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:02:21.206850  867603 main.go:141] libmachine: (functional-765881) DBG | Closing plugin on server side
I0429 12:02:21.206864  867603 main.go:141] libmachine: Making call to close driver server
I0429 12:02:21.206873  867603 main.go:141] libmachine: (functional-765881) Calling .Close
I0429 12:02:21.207206  867603 main.go:141] libmachine: (functional-765881) DBG | Closing plugin on server side
I0429 12:02:21.207213  867603 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:02:21.207236  867603 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.70s)
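For reference, the buildkit steps in the log above (a 97-byte Dockerfile, `FROM` busybox, `RUN true`, `ADD content.txt`) suggest that testdata/build contains a Dockerfile along these lines. This is a reconstruction from the log output, not the actual file in the minikube repository:

```dockerfile
# Reconstructed from build steps #5-#7 in the log above; the real
# testdata/build Dockerfile may differ in detail (tag, ordering).
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```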

TestFunctional/parallel/ImageCommands/Setup (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-765881
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.01s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-765881 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-765881 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-6hzbc" [7d9e72bf-2ee5-48ba-a5c3-d8a0251faecc] Pending
helpers_test.go:344: "hello-node-6d85cfcfd8-6hzbc" [7d9e72bf-2ee5-48ba-a5c3-d8a0251faecc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-6hzbc" [7d9e72bf-2ee5-48ba-a5c3-d8a0251faecc] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004350643s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image load --daemon gcr.io/google-containers/addon-resizer:functional-765881 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-765881 image load --daemon gcr.io/google-containers/addon-resizer:functional-765881 --alsologtostderr: (4.260675702s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.49s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image load --daemon gcr.io/google-containers/addon-resizer:functional-765881 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-765881 image load --daemon gcr.io/google-containers/addon-resizer:functional-765881 --alsologtostderr: (2.657931293s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.89s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-765881
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image load --daemon gcr.io/google-containers/addon-resizer:functional-765881 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-765881 image load --daemon gcr.io/google-containers/addon-resizer:functional-765881 --alsologtostderr: (5.041215516s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.13s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 service list -o json
functional_test.go:1490: Took "485.541571ms" to run "out/minikube-linux-amd64 -p functional-765881 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.49:31894
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.49:31894
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
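The `service --url` endpoints logged above are plain `scheme://node-ip:nodeport` URLs. A small sketch (a hypothetical helper, not part of the test code) that splits one apart with the standard library:

```python
from urllib.parse import urlsplit

# Hypothetical helper: extract the node IP and NodePort from a minikube
# `service --url` endpoint such as http://192.168.39.49:31894 above.
def split_endpoint(url: str) -> tuple[str, int]:
    parts = urlsplit(url)
    return parts.hostname, parts.port

print(split_endpoint("http://192.168.39.49:31894"))  # ('192.168.39.49', 31894)
```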

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 update-context --alsologtostderr -v=2
2024/04/29 12:02:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "255.177414ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "66.471992ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "339.442876ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "62.430157ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image save gcr.io/google-containers/addon-resizer:functional-765881 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-765881 image save gcr.io/google-containers/addon-resizer:functional-765881 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.080509101s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.08s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-765881 /tmp/TestFunctionalparallelMountCmdany-port1158812854/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714392125025360991" to /tmp/TestFunctionalparallelMountCmdany-port1158812854/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714392125025360991" to /tmp/TestFunctionalparallelMountCmdany-port1158812854/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714392125025360991" to /tmp/TestFunctionalparallelMountCmdany-port1158812854/001/test-1714392125025360991
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-765881 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (312.266531ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 29 12:02 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 29 12:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 29 12:02 test-1714392125025360991
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh cat /mount-9p/test-1714392125025360991
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-765881 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b39fbeac-2144-40e8-a146-db6a39f73c54] Pending
helpers_test.go:344: "busybox-mount" [b39fbeac-2144-40e8-a146-db6a39f73c54] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b39fbeac-2144-40e8-a146-db6a39f73c54] Running
helpers_test.go:344: "busybox-mount" [b39fbeac-2144-40e8-a146-db6a39f73c54] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b39fbeac-2144-40e8-a146-db6a39f73c54] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003835516s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-765881 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-765881 /tmp/TestFunctionalparallelMountCmdany-port1158812854/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image rm gcr.io/google-containers/addon-resizer:functional-765881 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-765881 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (2.029869603s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.30s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-765881
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 image save --daemon gcr.io/google-containers/addon-resizer:functional-765881 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-765881
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.18s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-765881 /tmp/TestFunctionalparallelMountCmdspecific-port1085164145/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-765881 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (247.033317ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-765881 /tmp/TestFunctionalparallelMountCmdspecific-port1085164145/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-765881 ssh "sudo umount -f /mount-9p": exit status 1 (210.644018ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-765881 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-765881 /tmp/TestFunctionalparallelMountCmdspecific-port1085164145/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.84s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-765881 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3490335741/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-765881 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3490335741/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-765881 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3490335741/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-765881 ssh "findmnt -T" /mount1: exit status 1 (281.790182ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-765881 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-765881 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-765881 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3490335741/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-765881 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3490335741/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-765881 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3490335741/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-765881
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-765881
--- PASS: TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-765881
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-486905 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0429 12:03:03.255147  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:05:19.409926  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:05:47.096160  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-486905 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m12.175283801s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (192.88s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-486905 -- rollout status deployment/busybox: (2.635938972s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- exec busybox-fc5497c4f-htcds -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- exec busybox-fc5497c4f-kwq7l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- exec busybox-fc5497c4f-zwhtr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- exec busybox-fc5497c4f-htcds -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- exec busybox-fc5497c4f-kwq7l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- exec busybox-fc5497c4f-zwhtr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- exec busybox-fc5497c4f-htcds -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- exec busybox-fc5497c4f-kwq7l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- exec busybox-fc5497c4f-zwhtr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.01s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- exec busybox-fc5497c4f-htcds -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- exec busybox-fc5497c4f-htcds -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- exec busybox-fc5497c4f-kwq7l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- exec busybox-fc5497c4f-kwq7l -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- exec busybox-fc5497c4f-zwhtr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-486905 -- exec busybox-fc5497c4f-zwhtr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.37s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-486905 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-486905 -v=7 --alsologtostderr: (44.944659415s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.79s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-486905 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp testdata/cp-test.txt ha-486905:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2246064101/001/cp-test_ha-486905.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905:/home/docker/cp-test.txt ha-486905-m02:/home/docker/cp-test_ha-486905_ha-486905-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m02 "sudo cat /home/docker/cp-test_ha-486905_ha-486905-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905:/home/docker/cp-test.txt ha-486905-m03:/home/docker/cp-test_ha-486905_ha-486905-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m03 "sudo cat /home/docker/cp-test_ha-486905_ha-486905-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905:/home/docker/cp-test.txt ha-486905-m04:/home/docker/cp-test_ha-486905_ha-486905-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m04 "sudo cat /home/docker/cp-test_ha-486905_ha-486905-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp testdata/cp-test.txt ha-486905-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2246064101/001/cp-test_ha-486905-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905-m02:/home/docker/cp-test.txt ha-486905:/home/docker/cp-test_ha-486905-m02_ha-486905.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905 "sudo cat /home/docker/cp-test_ha-486905-m02_ha-486905.txt"
E0429 12:06:50.831064  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
E0429 12:06:50.836426  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
E0429 12:06:50.846748  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
E0429 12:06:50.867528  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905-m02:/home/docker/cp-test.txt ha-486905-m03:/home/docker/cp-test_ha-486905-m02_ha-486905-m03.txt
E0429 12:06:50.908223  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
E0429 12:06:50.988544  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
E0429 12:06:51.148967  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m03 "sudo cat /home/docker/cp-test_ha-486905-m02_ha-486905-m03.txt"
E0429 12:06:51.469130  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905-m02:/home/docker/cp-test.txt ha-486905-m04:/home/docker/cp-test_ha-486905-m02_ha-486905-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m02 "sudo cat /home/docker/cp-test.txt"
E0429 12:06:52.110369  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m04 "sudo cat /home/docker/cp-test_ha-486905-m02_ha-486905-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp testdata/cp-test.txt ha-486905-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2246064101/001/cp-test_ha-486905-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905-m03:/home/docker/cp-test.txt ha-486905:/home/docker/cp-test_ha-486905-m03_ha-486905.txt
E0429 12:06:53.391162  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905 "sudo cat /home/docker/cp-test_ha-486905-m03_ha-486905.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905-m03:/home/docker/cp-test.txt ha-486905-m02:/home/docker/cp-test_ha-486905-m03_ha-486905-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m02 "sudo cat /home/docker/cp-test_ha-486905-m03_ha-486905-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905-m03:/home/docker/cp-test.txt ha-486905-m04:/home/docker/cp-test_ha-486905-m03_ha-486905-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m04 "sudo cat /home/docker/cp-test_ha-486905-m03_ha-486905-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp testdata/cp-test.txt ha-486905-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m04 "sudo cat /home/docker/cp-test.txt"
E0429 12:06:55.951943  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2246064101/001/cp-test_ha-486905-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905-m04:/home/docker/cp-test.txt ha-486905:/home/docker/cp-test_ha-486905-m04_ha-486905.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905 "sudo cat /home/docker/cp-test_ha-486905-m04_ha-486905.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905-m04:/home/docker/cp-test.txt ha-486905-m02:/home/docker/cp-test_ha-486905-m04_ha-486905-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m02 "sudo cat /home/docker/cp-test_ha-486905-m04_ha-486905-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 cp ha-486905-m04:/home/docker/cp-test.txt ha-486905-m03:/home/docker/cp-test_ha-486905-m04_ha-486905-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 ssh -n ha-486905-m03 "sudo cat /home/docker/cp-test_ha-486905-m04_ha-486905-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.87s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 node stop m02 -v=7 --alsologtostderr
E0429 12:07:01.072278  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
E0429 12:07:11.312947  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
E0429 12:07:31.793701  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
E0429 12:08:12.754136  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-486905 node stop m02 -v=7 --alsologtostderr: (1m31.706970119s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-486905 status -v=7 --alsologtostderr: exit status 7 (682.849977ms)

-- stdout --
	ha-486905
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-486905-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-486905-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-486905-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0429 12:08:30.643433  871824 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:08:30.643562  871824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:08:30.643570  871824 out.go:304] Setting ErrFile to fd 2...
	I0429 12:08:30.643581  871824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:08:30.643841  871824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
	I0429 12:08:30.644040  871824 out.go:298] Setting JSON to false
	I0429 12:08:30.644069  871824 mustload.go:65] Loading cluster: ha-486905
	I0429 12:08:30.644208  871824 notify.go:220] Checking for updates...
	I0429 12:08:30.644447  871824 config.go:182] Loaded profile config "ha-486905": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 12:08:30.644463  871824 status.go:255] checking status of ha-486905 ...
	I0429 12:08:30.644802  871824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:08:30.644891  871824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:08:30.662559  871824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34387
	I0429 12:08:30.662995  871824 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:08:30.663573  871824 main.go:141] libmachine: Using API Version  1
	I0429 12:08:30.663596  871824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:08:30.664174  871824 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:08:30.664436  871824 main.go:141] libmachine: (ha-486905) Calling .GetState
	I0429 12:08:30.666339  871824 status.go:330] ha-486905 host status = "Running" (err=<nil>)
	I0429 12:08:30.666359  871824 host.go:66] Checking if "ha-486905" exists ...
	I0429 12:08:30.666789  871824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:08:30.666847  871824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:08:30.682808  871824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I0429 12:08:30.683262  871824 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:08:30.683785  871824 main.go:141] libmachine: Using API Version  1
	I0429 12:08:30.683809  871824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:08:30.684166  871824 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:08:30.684404  871824 main.go:141] libmachine: (ha-486905) Calling .GetIP
	I0429 12:08:30.687140  871824 main.go:141] libmachine: (ha-486905) DBG | domain ha-486905 has defined MAC address 52:54:00:10:05:7b in network mk-ha-486905
	I0429 12:08:30.687592  871824 main.go:141] libmachine: (ha-486905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:05:7b", ip: ""} in network mk-ha-486905: {Iface:virbr1 ExpiryTime:2024-04-29 13:02:53 +0000 UTC Type:0 Mac:52:54:00:10:05:7b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-486905 Clientid:01:52:54:00:10:05:7b}
	I0429 12:08:30.687622  871824 main.go:141] libmachine: (ha-486905) DBG | domain ha-486905 has defined IP address 192.168.39.150 and MAC address 52:54:00:10:05:7b in network mk-ha-486905
	I0429 12:08:30.687763  871824 host.go:66] Checking if "ha-486905" exists ...
	I0429 12:08:30.688061  871824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:08:30.688107  871824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:08:30.703729  871824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I0429 12:08:30.704183  871824 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:08:30.704724  871824 main.go:141] libmachine: Using API Version  1
	I0429 12:08:30.704753  871824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:08:30.705172  871824 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:08:30.705394  871824 main.go:141] libmachine: (ha-486905) Calling .DriverName
	I0429 12:08:30.705615  871824 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:08:30.705674  871824 main.go:141] libmachine: (ha-486905) Calling .GetSSHHostname
	I0429 12:08:30.708487  871824 main.go:141] libmachine: (ha-486905) DBG | domain ha-486905 has defined MAC address 52:54:00:10:05:7b in network mk-ha-486905
	I0429 12:08:30.708889  871824 main.go:141] libmachine: (ha-486905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:05:7b", ip: ""} in network mk-ha-486905: {Iface:virbr1 ExpiryTime:2024-04-29 13:02:53 +0000 UTC Type:0 Mac:52:54:00:10:05:7b Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-486905 Clientid:01:52:54:00:10:05:7b}
	I0429 12:08:30.708907  871824 main.go:141] libmachine: (ha-486905) DBG | domain ha-486905 has defined IP address 192.168.39.150 and MAC address 52:54:00:10:05:7b in network mk-ha-486905
	I0429 12:08:30.709022  871824 main.go:141] libmachine: (ha-486905) Calling .GetSSHPort
	I0429 12:08:30.709228  871824 main.go:141] libmachine: (ha-486905) Calling .GetSSHKeyPath
	I0429 12:08:30.709375  871824 main.go:141] libmachine: (ha-486905) Calling .GetSSHUsername
	I0429 12:08:30.709545  871824 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/ha-486905/id_rsa Username:docker}
	I0429 12:08:30.800385  871824 ssh_runner.go:195] Run: systemctl --version
	I0429 12:08:30.807673  871824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:08:30.824362  871824 kubeconfig.go:125] found "ha-486905" server: "https://192.168.39.254:8443"
	I0429 12:08:30.824392  871824 api_server.go:166] Checking apiserver status ...
	I0429 12:08:30.824431  871824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:08:30.841601  871824 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1168/cgroup
	W0429 12:08:30.854678  871824 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1168/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:08:30.854738  871824 ssh_runner.go:195] Run: ls
	I0429 12:08:30.859927  871824 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:08:30.867881  871824 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:08:30.867917  871824 status.go:422] ha-486905 apiserver status = Running (err=<nil>)
	I0429 12:08:30.867934  871824 status.go:257] ha-486905 status: &{Name:ha-486905 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:08:30.867959  871824 status.go:255] checking status of ha-486905-m02 ...
	I0429 12:08:30.868349  871824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:08:30.868411  871824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:08:30.883540  871824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39623
	I0429 12:08:30.884040  871824 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:08:30.884515  871824 main.go:141] libmachine: Using API Version  1
	I0429 12:08:30.884538  871824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:08:30.884863  871824 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:08:30.885098  871824 main.go:141] libmachine: (ha-486905-m02) Calling .GetState
	I0429 12:08:30.886592  871824 status.go:330] ha-486905-m02 host status = "Stopped" (err=<nil>)
	I0429 12:08:30.886609  871824 status.go:343] host is not running, skipping remaining checks
	I0429 12:08:30.886623  871824 status.go:257] ha-486905-m02 status: &{Name:ha-486905-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:08:30.886645  871824 status.go:255] checking status of ha-486905-m03 ...
	I0429 12:08:30.886946  871824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:08:30.886999  871824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:08:30.903280  871824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0429 12:08:30.903803  871824 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:08:30.904324  871824 main.go:141] libmachine: Using API Version  1
	I0429 12:08:30.904349  871824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:08:30.904647  871824 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:08:30.904926  871824 main.go:141] libmachine: (ha-486905-m03) Calling .GetState
	I0429 12:08:30.906666  871824 status.go:330] ha-486905-m03 host status = "Running" (err=<nil>)
	I0429 12:08:30.906684  871824 host.go:66] Checking if "ha-486905-m03" exists ...
	I0429 12:08:30.907065  871824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:08:30.907113  871824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:08:30.922262  871824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I0429 12:08:30.922813  871824 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:08:30.923324  871824 main.go:141] libmachine: Using API Version  1
	I0429 12:08:30.923355  871824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:08:30.923665  871824 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:08:30.923866  871824 main.go:141] libmachine: (ha-486905-m03) Calling .GetIP
	I0429 12:08:30.926748  871824 main.go:141] libmachine: (ha-486905-m03) DBG | domain ha-486905-m03 has defined MAC address 52:54:00:cf:df:4c in network mk-ha-486905
	I0429 12:08:30.927215  871824 main.go:141] libmachine: (ha-486905-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:df:4c", ip: ""} in network mk-ha-486905: {Iface:virbr1 ExpiryTime:2024-04-29 13:04:56 +0000 UTC Type:0 Mac:52:54:00:cf:df:4c Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-486905-m03 Clientid:01:52:54:00:cf:df:4c}
	I0429 12:08:30.927251  871824 main.go:141] libmachine: (ha-486905-m03) DBG | domain ha-486905-m03 has defined IP address 192.168.39.252 and MAC address 52:54:00:cf:df:4c in network mk-ha-486905
	I0429 12:08:30.927328  871824 host.go:66] Checking if "ha-486905-m03" exists ...
	I0429 12:08:30.927713  871824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:08:30.927763  871824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:08:30.943021  871824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0429 12:08:30.943442  871824 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:08:30.943982  871824 main.go:141] libmachine: Using API Version  1
	I0429 12:08:30.944007  871824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:08:30.944412  871824 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:08:30.944630  871824 main.go:141] libmachine: (ha-486905-m03) Calling .DriverName
	I0429 12:08:30.944849  871824 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:08:30.944871  871824 main.go:141] libmachine: (ha-486905-m03) Calling .GetSSHHostname
	I0429 12:08:30.947616  871824 main.go:141] libmachine: (ha-486905-m03) DBG | domain ha-486905-m03 has defined MAC address 52:54:00:cf:df:4c in network mk-ha-486905
	I0429 12:08:30.948077  871824 main.go:141] libmachine: (ha-486905-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:df:4c", ip: ""} in network mk-ha-486905: {Iface:virbr1 ExpiryTime:2024-04-29 13:04:56 +0000 UTC Type:0 Mac:52:54:00:cf:df:4c Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-486905-m03 Clientid:01:52:54:00:cf:df:4c}
	I0429 12:08:30.948120  871824 main.go:141] libmachine: (ha-486905-m03) DBG | domain ha-486905-m03 has defined IP address 192.168.39.252 and MAC address 52:54:00:cf:df:4c in network mk-ha-486905
	I0429 12:08:30.948330  871824 main.go:141] libmachine: (ha-486905-m03) Calling .GetSSHPort
	I0429 12:08:30.948559  871824 main.go:141] libmachine: (ha-486905-m03) Calling .GetSSHKeyPath
	I0429 12:08:30.948739  871824 main.go:141] libmachine: (ha-486905-m03) Calling .GetSSHUsername
	I0429 12:08:30.948951  871824 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/ha-486905-m03/id_rsa Username:docker}
	I0429 12:08:31.035960  871824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:08:31.057242  871824 kubeconfig.go:125] found "ha-486905" server: "https://192.168.39.254:8443"
	I0429 12:08:31.057282  871824 api_server.go:166] Checking apiserver status ...
	I0429 12:08:31.057335  871824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:08:31.073723  871824 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup
	W0429 12:08:31.087798  871824 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:08:31.087860  871824 ssh_runner.go:195] Run: ls
	I0429 12:08:31.092963  871824 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 12:08:31.097715  871824 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 12:08:31.097744  871824 status.go:422] ha-486905-m03 apiserver status = Running (err=<nil>)
	I0429 12:08:31.097757  871824 status.go:257] ha-486905-m03 status: &{Name:ha-486905-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:08:31.097781  871824 status.go:255] checking status of ha-486905-m04 ...
	I0429 12:08:31.098090  871824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:08:31.098145  871824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:08:31.114055  871824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33673
	I0429 12:08:31.114605  871824 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:08:31.115183  871824 main.go:141] libmachine: Using API Version  1
	I0429 12:08:31.115213  871824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:08:31.115541  871824 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:08:31.115736  871824 main.go:141] libmachine: (ha-486905-m04) Calling .GetState
	I0429 12:08:31.117249  871824 status.go:330] ha-486905-m04 host status = "Running" (err=<nil>)
	I0429 12:08:31.117269  871824 host.go:66] Checking if "ha-486905-m04" exists ...
	I0429 12:08:31.117566  871824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:08:31.117601  871824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:08:31.133333  871824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0429 12:08:31.133833  871824 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:08:31.134297  871824 main.go:141] libmachine: Using API Version  1
	I0429 12:08:31.134320  871824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:08:31.134664  871824 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:08:31.134896  871824 main.go:141] libmachine: (ha-486905-m04) Calling .GetIP
	I0429 12:08:31.137825  871824 main.go:141] libmachine: (ha-486905-m04) DBG | domain ha-486905-m04 has defined MAC address 52:54:00:77:21:5b in network mk-ha-486905
	I0429 12:08:31.138345  871824 main.go:141] libmachine: (ha-486905-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:21:5b", ip: ""} in network mk-ha-486905: {Iface:virbr1 ExpiryTime:2024-04-29 13:06:13 +0000 UTC Type:0 Mac:52:54:00:77:21:5b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-486905-m04 Clientid:01:52:54:00:77:21:5b}
	I0429 12:08:31.138374  871824 main.go:141] libmachine: (ha-486905-m04) DBG | domain ha-486905-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:77:21:5b in network mk-ha-486905
	I0429 12:08:31.138525  871824 host.go:66] Checking if "ha-486905-m04" exists ...
	I0429 12:08:31.138857  871824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:08:31.138897  871824 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:08:31.155582  871824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34113
	I0429 12:08:31.156065  871824 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:08:31.156657  871824 main.go:141] libmachine: Using API Version  1
	I0429 12:08:31.156681  871824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:08:31.156996  871824 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:08:31.157353  871824 main.go:141] libmachine: (ha-486905-m04) Calling .DriverName
	I0429 12:08:31.157557  871824 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:08:31.157579  871824 main.go:141] libmachine: (ha-486905-m04) Calling .GetSSHHostname
	I0429 12:08:31.160370  871824 main.go:141] libmachine: (ha-486905-m04) DBG | domain ha-486905-m04 has defined MAC address 52:54:00:77:21:5b in network mk-ha-486905
	I0429 12:08:31.160798  871824 main.go:141] libmachine: (ha-486905-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:21:5b", ip: ""} in network mk-ha-486905: {Iface:virbr1 ExpiryTime:2024-04-29 13:06:13 +0000 UTC Type:0 Mac:52:54:00:77:21:5b Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-486905-m04 Clientid:01:52:54:00:77:21:5b}
	I0429 12:08:31.160826  871824 main.go:141] libmachine: (ha-486905-m04) DBG | domain ha-486905-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:77:21:5b in network mk-ha-486905
	I0429 12:08:31.160975  871824 main.go:141] libmachine: (ha-486905-m04) Calling .GetSSHPort
	I0429 12:08:31.161215  871824 main.go:141] libmachine: (ha-486905-m04) Calling .GetSSHKeyPath
	I0429 12:08:31.161371  871824 main.go:141] libmachine: (ha-486905-m04) Calling .GetSSHUsername
	I0429 12:08:31.161506  871824 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/ha-486905-m04/id_rsa Username:docker}
	I0429 12:08:31.243107  871824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:08:31.258838  871824 status.go:257] ha-486905-m04 status: &{Name:ha-486905-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (92.39s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)

TestMultiControlPlane/serial/RestartSecondaryNode (43.29s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-486905 node start m02 -v=7 --alsologtostderr: (42.333717065s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (43.29s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.57s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.57s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (478.64s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-486905 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-486905 -v=7 --alsologtostderr
E0429 12:09:34.675103  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
E0429 12:10:19.409955  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:11:50.831545  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
E0429 12:12:18.516073  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-486905 -v=7 --alsologtostderr: (4m36.279919951s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-486905 --wait=true -v=7 --alsologtostderr
E0429 12:15:19.410235  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:16:42.456489  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:16:50.830905  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-486905 --wait=true -v=7 --alsologtostderr: (3m22.236778187s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-486905
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (478.64s)

TestMultiControlPlane/serial/DeleteSecondaryNode (6.91s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-486905 node delete m03 -v=7 --alsologtostderr: (6.138842588s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.91s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

TestMultiControlPlane/serial/StopCluster (274.78s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 stop -v=7 --alsologtostderr
E0429 12:20:19.410847  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:21:50.831003  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-486905 stop -v=7 --alsologtostderr: (4m34.64694944s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-486905 status -v=7 --alsologtostderr: exit status 7 (127.885633ms)

-- stdout --
	ha-486905
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-486905-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-486905-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0429 12:21:56.199871  875352 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:21:56.200021  875352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:21:56.200031  875352 out.go:304] Setting ErrFile to fd 2...
	I0429 12:21:56.200037  875352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:21:56.200267  875352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
	I0429 12:21:56.200496  875352 out.go:298] Setting JSON to false
	I0429 12:21:56.200533  875352 mustload.go:65] Loading cluster: ha-486905
	I0429 12:21:56.200655  875352 notify.go:220] Checking for updates...
	I0429 12:21:56.200968  875352 config.go:182] Loaded profile config "ha-486905": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 12:21:56.200988  875352 status.go:255] checking status of ha-486905 ...
	I0429 12:21:56.201445  875352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:21:56.201515  875352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:21:56.225923  875352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0429 12:21:56.226428  875352 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:21:56.226974  875352 main.go:141] libmachine: Using API Version  1
	I0429 12:21:56.226998  875352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:21:56.227352  875352 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:21:56.227570  875352 main.go:141] libmachine: (ha-486905) Calling .GetState
	I0429 12:21:56.229146  875352 status.go:330] ha-486905 host status = "Stopped" (err=<nil>)
	I0429 12:21:56.229159  875352 status.go:343] host is not running, skipping remaining checks
	I0429 12:21:56.229165  875352 status.go:257] ha-486905 status: &{Name:ha-486905 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:21:56.229190  875352 status.go:255] checking status of ha-486905-m02 ...
	I0429 12:21:56.229490  875352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:21:56.229535  875352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:21:56.244411  875352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0429 12:21:56.244907  875352 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:21:56.245372  875352 main.go:141] libmachine: Using API Version  1
	I0429 12:21:56.245392  875352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:21:56.245742  875352 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:21:56.245942  875352 main.go:141] libmachine: (ha-486905-m02) Calling .GetState
	I0429 12:21:56.247339  875352 status.go:330] ha-486905-m02 host status = "Stopped" (err=<nil>)
	I0429 12:21:56.247356  875352 status.go:343] host is not running, skipping remaining checks
	I0429 12:21:56.247364  875352 status.go:257] ha-486905-m02 status: &{Name:ha-486905-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:21:56.247403  875352 status.go:255] checking status of ha-486905-m04 ...
	I0429 12:21:56.247679  875352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:21:56.247712  875352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:21:56.262463  875352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I0429 12:21:56.262973  875352 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:21:56.263471  875352 main.go:141] libmachine: Using API Version  1
	I0429 12:21:56.263493  875352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:21:56.263834  875352 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:21:56.264019  875352 main.go:141] libmachine: (ha-486905-m04) Calling .GetState
	I0429 12:21:56.265388  875352 status.go:330] ha-486905-m04 host status = "Stopped" (err=<nil>)
	I0429 12:21:56.265402  875352 status.go:343] host is not running, skipping remaining checks
	I0429 12:21:56.265408  875352 status.go:257] ha-486905-m04 status: &{Name:ha-486905-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (274.78s)

TestMultiControlPlane/serial/RestartCluster (117.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-486905 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0429 12:23:13.877259  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-486905 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m56.475769213s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (117.25s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

TestMultiControlPlane/serial/AddSecondaryNode (73.88s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-486905 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-486905 --control-plane -v=7 --alsologtostderr: (1m12.997098996s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-486905 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.88s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.58s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.58s)

TestJSONOutput/start/Command (59.02s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-674411 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0429 12:25:19.409864  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-674411 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (59.014783414s)
--- PASS: TestJSONOutput/start/Command (59.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-674411 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-674411 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.34s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-674411 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-674411 --output=json --user=testUser: (7.344377374s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-433037 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-433037 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.0132ms)

-- stdout --
	{"specversion":"1.0","id":"746c0b51-723a-4646-8afe-0171438d6e32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-433037] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b31583e-dc9d-40d3-9c27-1b084e20822d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18773"}}
	{"specversion":"1.0","id":"2535f0ea-b213-405c-95cc-e3b28fa57049","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7f259671-dbad-4f47-b1a9-6e91a78eef6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18773-852552/kubeconfig"}}
	{"specversion":"1.0","id":"550e1c60-0830-4a4e-ba61-8d8766e0299b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-852552/.minikube"}}
	{"specversion":"1.0","id":"d38ee661-533a-4cf1-9e67-8cae826bb875","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5406d961-24b9-4d66-9168-74c5c17b6dd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1b63ffd4-7025-40df-a4a0-ed1f6744f902","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-433037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-433037
--- PASS: TestErrorJSONOutput (0.22s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (90.09s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-447191 --driver=kvm2  --container-runtime=containerd
E0429 12:26:50.831336  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-447191 --driver=kvm2  --container-runtime=containerd: (45.702431459s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-449939 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-449939 --driver=kvm2  --container-runtime=containerd: (41.616507709s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-447191
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-449939
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-449939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-449939
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-449939: (1.041932415s)
helpers_test.go:175: Cleaning up "first-447191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-447191
--- PASS: TestMinikubeProfile (90.09s)

TestMountStart/serial/StartWithMountFirst (27.78s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-182279 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-182279 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.781938762s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.78s)

TestMountStart/serial/VerifyMountFirst (0.41s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-182279 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-182279 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

TestMountStart/serial/StartWithMountSecond (29.27s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-198767 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-198767 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.271057163s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.27s)

TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-198767 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-198767 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-182279 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-198767 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-198767 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

TestMountStart/serial/Stop (1.32s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-198767
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-198767: (1.321234913s)
--- PASS: TestMountStart/serial/Stop (1.32s)

TestMountStart/serial/RestartStopped (22.46s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-198767
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-198767: (21.457035632s)
--- PASS: TestMountStart/serial/RestartStopped (22.46s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-198767 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-198767 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

TestMultiNode/serial/FreshStart2Nodes (103.48s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-224754 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0429 12:30:19.410264  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-224754 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m43.063340317s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.48s)

TestMultiNode/serial/DeployApp2Nodes (3.83s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224754 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224754 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-224754 -- rollout status deployment/busybox: (2.218369969s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224754 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224754 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224754 -- exec busybox-fc5497c4f-fnlvj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224754 -- exec busybox-fc5497c4f-sw866 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224754 -- exec busybox-fc5497c4f-fnlvj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224754 -- exec busybox-fc5497c4f-sw866 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224754 -- exec busybox-fc5497c4f-fnlvj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224754 -- exec busybox-fc5497c4f-sw866 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.83s)

TestMultiNode/serial/PingHostFrom2Pods (0.87s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224754 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224754 -- exec busybox-fc5497c4f-fnlvj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224754 -- exec busybox-fc5497c4f-fnlvj -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224754 -- exec busybox-fc5497c4f-sw866 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-224754 -- exec busybox-fc5497c4f-sw866 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

TestMultiNode/serial/AddNode (35.21s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-224754 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-224754 -v 3 --alsologtostderr: (34.638572816s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (35.21s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-224754 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (7.57s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 cp testdata/cp-test.txt multinode-224754:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 cp multinode-224754:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1275077438/001/cp-test_multinode-224754.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 cp multinode-224754:/home/docker/cp-test.txt multinode-224754-m02:/home/docker/cp-test_multinode-224754_multinode-224754-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754-m02 "sudo cat /home/docker/cp-test_multinode-224754_multinode-224754-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 cp multinode-224754:/home/docker/cp-test.txt multinode-224754-m03:/home/docker/cp-test_multinode-224754_multinode-224754-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754-m03 "sudo cat /home/docker/cp-test_multinode-224754_multinode-224754-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 cp testdata/cp-test.txt multinode-224754-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 cp multinode-224754-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1275077438/001/cp-test_multinode-224754-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 cp multinode-224754-m02:/home/docker/cp-test.txt multinode-224754:/home/docker/cp-test_multinode-224754-m02_multinode-224754.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754 "sudo cat /home/docker/cp-test_multinode-224754-m02_multinode-224754.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 cp multinode-224754-m02:/home/docker/cp-test.txt multinode-224754-m03:/home/docker/cp-test_multinode-224754-m02_multinode-224754-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754-m03 "sudo cat /home/docker/cp-test_multinode-224754-m02_multinode-224754-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 cp testdata/cp-test.txt multinode-224754-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 cp multinode-224754-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1275077438/001/cp-test_multinode-224754-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 cp multinode-224754-m03:/home/docker/cp-test.txt multinode-224754:/home/docker/cp-test_multinode-224754-m03_multinode-224754.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754 "sudo cat /home/docker/cp-test_multinode-224754-m03_multinode-224754.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 cp multinode-224754-m03:/home/docker/cp-test.txt multinode-224754-m02:/home/docker/cp-test_multinode-224754-m03_multinode-224754-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 ssh -n multinode-224754-m02 "sudo cat /home/docker/cp-test_multinode-224754-m03_multinode-224754-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.57s)

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-224754 node stop m03: (1.396252152s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-224754 status: exit status 7 (440.00216ms)

-- stdout --
	multinode-224754
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-224754-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-224754-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-224754 status --alsologtostderr: exit status 7 (433.072205ms)

-- stdout --
	multinode-224754
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-224754-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-224754-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0429 12:31:47.240475  882112 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:31:47.240745  882112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:31:47.240763  882112 out.go:304] Setting ErrFile to fd 2...
	I0429 12:31:47.240767  882112 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:31:47.241025  882112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
	I0429 12:31:47.241234  882112 out.go:298] Setting JSON to false
	I0429 12:31:47.241267  882112 mustload.go:65] Loading cluster: multinode-224754
	I0429 12:31:47.241397  882112 notify.go:220] Checking for updates...
	I0429 12:31:47.241750  882112 config.go:182] Loaded profile config "multinode-224754": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 12:31:47.241782  882112 status.go:255] checking status of multinode-224754 ...
	I0429 12:31:47.242299  882112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:31:47.242354  882112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:31:47.258346  882112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42481
	I0429 12:31:47.258808  882112 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:31:47.259389  882112 main.go:141] libmachine: Using API Version  1
	I0429 12:31:47.259414  882112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:31:47.259804  882112 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:31:47.260050  882112 main.go:141] libmachine: (multinode-224754) Calling .GetState
	I0429 12:31:47.261656  882112 status.go:330] multinode-224754 host status = "Running" (err=<nil>)
	I0429 12:31:47.261677  882112 host.go:66] Checking if "multinode-224754" exists ...
	I0429 12:31:47.262068  882112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:31:47.262114  882112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:31:47.277222  882112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38907
	I0429 12:31:47.277712  882112 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:31:47.278245  882112 main.go:141] libmachine: Using API Version  1
	I0429 12:31:47.278273  882112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:31:47.278591  882112 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:31:47.278785  882112 main.go:141] libmachine: (multinode-224754) Calling .GetIP
	I0429 12:31:47.281415  882112 main.go:141] libmachine: (multinode-224754) DBG | domain multinode-224754 has defined MAC address 52:54:00:cc:67:dc in network mk-multinode-224754
	I0429 12:31:47.281933  882112 main.go:141] libmachine: (multinode-224754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:67:dc", ip: ""} in network mk-multinode-224754: {Iface:virbr1 ExpiryTime:2024-04-29 13:29:28 +0000 UTC Type:0 Mac:52:54:00:cc:67:dc Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:multinode-224754 Clientid:01:52:54:00:cc:67:dc}
	I0429 12:31:47.281975  882112 main.go:141] libmachine: (multinode-224754) DBG | domain multinode-224754 has defined IP address 192.168.39.177 and MAC address 52:54:00:cc:67:dc in network mk-multinode-224754
	I0429 12:31:47.282139  882112 host.go:66] Checking if "multinode-224754" exists ...
	I0429 12:31:47.282481  882112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:31:47.282521  882112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:31:47.297874  882112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
	I0429 12:31:47.298258  882112 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:31:47.298739  882112 main.go:141] libmachine: Using API Version  1
	I0429 12:31:47.298759  882112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:31:47.299084  882112 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:31:47.299322  882112 main.go:141] libmachine: (multinode-224754) Calling .DriverName
	I0429 12:31:47.299546  882112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:31:47.299574  882112 main.go:141] libmachine: (multinode-224754) Calling .GetSSHHostname
	I0429 12:31:47.301900  882112 main.go:141] libmachine: (multinode-224754) DBG | domain multinode-224754 has defined MAC address 52:54:00:cc:67:dc in network mk-multinode-224754
	I0429 12:31:47.302290  882112 main.go:141] libmachine: (multinode-224754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:67:dc", ip: ""} in network mk-multinode-224754: {Iface:virbr1 ExpiryTime:2024-04-29 13:29:28 +0000 UTC Type:0 Mac:52:54:00:cc:67:dc Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:multinode-224754 Clientid:01:52:54:00:cc:67:dc}
	I0429 12:31:47.302317  882112 main.go:141] libmachine: (multinode-224754) DBG | domain multinode-224754 has defined IP address 192.168.39.177 and MAC address 52:54:00:cc:67:dc in network mk-multinode-224754
	I0429 12:31:47.302470  882112 main.go:141] libmachine: (multinode-224754) Calling .GetSSHPort
	I0429 12:31:47.302628  882112 main.go:141] libmachine: (multinode-224754) Calling .GetSSHKeyPath
	I0429 12:31:47.302755  882112 main.go:141] libmachine: (multinode-224754) Calling .GetSSHUsername
	I0429 12:31:47.302918  882112 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/multinode-224754/id_rsa Username:docker}
	I0429 12:31:47.385168  882112 ssh_runner.go:195] Run: systemctl --version
	I0429 12:31:47.392262  882112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:31:47.407027  882112 kubeconfig.go:125] found "multinode-224754" server: "https://192.168.39.177:8443"
	I0429 12:31:47.407066  882112 api_server.go:166] Checking apiserver status ...
	I0429 12:31:47.407098  882112 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:31:47.421309  882112 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1196/cgroup
	W0429 12:31:47.432099  882112 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1196/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:31:47.432188  882112 ssh_runner.go:195] Run: ls
	I0429 12:31:47.436737  882112 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0429 12:31:47.441476  882112 api_server.go:279] https://192.168.39.177:8443/healthz returned 200:
	ok
	I0429 12:31:47.441502  882112 status.go:422] multinode-224754 apiserver status = Running (err=<nil>)
	I0429 12:31:47.441514  882112 status.go:257] multinode-224754 status: &{Name:multinode-224754 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:31:47.441531  882112 status.go:255] checking status of multinode-224754-m02 ...
	I0429 12:31:47.441897  882112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:31:47.441932  882112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:31:47.458085  882112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40123
	I0429 12:31:47.458518  882112 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:31:47.459009  882112 main.go:141] libmachine: Using API Version  1
	I0429 12:31:47.459045  882112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:31:47.459437  882112 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:31:47.459660  882112 main.go:141] libmachine: (multinode-224754-m02) Calling .GetState
	I0429 12:31:47.461225  882112 status.go:330] multinode-224754-m02 host status = "Running" (err=<nil>)
	I0429 12:31:47.461245  882112 host.go:66] Checking if "multinode-224754-m02" exists ...
	I0429 12:31:47.461534  882112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:31:47.461571  882112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:31:47.476438  882112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32823
	I0429 12:31:47.476923  882112 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:31:47.477539  882112 main.go:141] libmachine: Using API Version  1
	I0429 12:31:47.477559  882112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:31:47.477908  882112 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:31:47.478110  882112 main.go:141] libmachine: (multinode-224754-m02) Calling .GetIP
	I0429 12:31:47.480858  882112 main.go:141] libmachine: (multinode-224754-m02) DBG | domain multinode-224754-m02 has defined MAC address 52:54:00:b0:da:95 in network mk-multinode-224754
	I0429 12:31:47.481276  882112 main.go:141] libmachine: (multinode-224754-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:da:95", ip: ""} in network mk-multinode-224754: {Iface:virbr1 ExpiryTime:2024-04-29 13:30:34 +0000 UTC Type:0 Mac:52:54:00:b0:da:95 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-224754-m02 Clientid:01:52:54:00:b0:da:95}
	I0429 12:31:47.481296  882112 main.go:141] libmachine: (multinode-224754-m02) DBG | domain multinode-224754-m02 has defined IP address 192.168.39.119 and MAC address 52:54:00:b0:da:95 in network mk-multinode-224754
	I0429 12:31:47.481399  882112 host.go:66] Checking if "multinode-224754-m02" exists ...
	I0429 12:31:47.481757  882112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:31:47.481830  882112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:31:47.496779  882112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33243
	I0429 12:31:47.497248  882112 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:31:47.497776  882112 main.go:141] libmachine: Using API Version  1
	I0429 12:31:47.497802  882112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:31:47.498098  882112 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:31:47.498294  882112 main.go:141] libmachine: (multinode-224754-m02) Calling .DriverName
	I0429 12:31:47.498472  882112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:31:47.498495  882112 main.go:141] libmachine: (multinode-224754-m02) Calling .GetSSHHostname
	I0429 12:31:47.501078  882112 main.go:141] libmachine: (multinode-224754-m02) DBG | domain multinode-224754-m02 has defined MAC address 52:54:00:b0:da:95 in network mk-multinode-224754
	I0429 12:31:47.501508  882112 main.go:141] libmachine: (multinode-224754-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:da:95", ip: ""} in network mk-multinode-224754: {Iface:virbr1 ExpiryTime:2024-04-29 13:30:34 +0000 UTC Type:0 Mac:52:54:00:b0:da:95 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-224754-m02 Clientid:01:52:54:00:b0:da:95}
	I0429 12:31:47.501537  882112 main.go:141] libmachine: (multinode-224754-m02) DBG | domain multinode-224754-m02 has defined IP address 192.168.39.119 and MAC address 52:54:00:b0:da:95 in network mk-multinode-224754
	I0429 12:31:47.501688  882112 main.go:141] libmachine: (multinode-224754-m02) Calling .GetSSHPort
	I0429 12:31:47.501836  882112 main.go:141] libmachine: (multinode-224754-m02) Calling .GetSSHKeyPath
	I0429 12:31:47.501990  882112 main.go:141] libmachine: (multinode-224754-m02) Calling .GetSSHUsername
	I0429 12:31:47.502135  882112 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18773-852552/.minikube/machines/multinode-224754-m02/id_rsa Username:docker}
	I0429 12:31:47.581047  882112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:31:47.594982  882112 status.go:257] multinode-224754-m02 status: &{Name:multinode-224754-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:31:47.595041  882112 status.go:255] checking status of multinode-224754-m03 ...
	I0429 12:31:47.595406  882112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:31:47.595449  882112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:31:47.612159  882112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0429 12:31:47.612641  882112 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:31:47.613207  882112 main.go:141] libmachine: Using API Version  1
	I0429 12:31:47.613233  882112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:31:47.613556  882112 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:31:47.613816  882112 main.go:141] libmachine: (multinode-224754-m03) Calling .GetState
	I0429 12:31:47.615467  882112 status.go:330] multinode-224754-m03 host status = "Stopped" (err=<nil>)
	I0429 12:31:47.615482  882112 status.go:343] host is not running, skipping remaining checks
	I0429 12:31:47.615488  882112 status.go:257] multinode-224754-m03 status: &{Name:multinode-224754-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)

TestMultiNode/serial/StartAfterStop (25.42s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 node start m03 -v=7 --alsologtostderr
E0429 12:31:50.830755  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-224754 node start m03 -v=7 --alsologtostderr: (24.776108929s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (25.42s)

TestMultiNode/serial/RestartKeepsNodes (293.74s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-224754
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-224754
E0429 12:33:22.458006  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-224754: (3m4.368377869s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-224754 --wait=true -v=8 --alsologtostderr
E0429 12:35:19.410744  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 12:36:50.831347  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-224754 --wait=true -v=8 --alsologtostderr: (1m49.252994058s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-224754
--- PASS: TestMultiNode/serial/RestartKeepsNodes (293.74s)

TestMultiNode/serial/DeleteNode (2.38s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-224754 node delete m03: (1.832607848s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.38s)

TestMultiNode/serial/StopMultiNode (183.29s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 stop
E0429 12:39:53.878007  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-224754 stop: (3m3.091877931s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-224754 status: exit status 7 (105.114641ms)

-- stdout --
	multinode-224754
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-224754-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-224754 status --alsologtostderr: exit status 7 (95.230414ms)

-- stdout --
	multinode-224754
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-224754-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0429 12:40:12.411149  884239 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:40:12.411416  884239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:40:12.411426  884239 out.go:304] Setting ErrFile to fd 2...
	I0429 12:40:12.411431  884239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:40:12.411620  884239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
	I0429 12:40:12.411822  884239 out.go:298] Setting JSON to false
	I0429 12:40:12.411854  884239 mustload.go:65] Loading cluster: multinode-224754
	I0429 12:40:12.411930  884239 notify.go:220] Checking for updates...
	I0429 12:40:12.412213  884239 config.go:182] Loaded profile config "multinode-224754": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 12:40:12.412228  884239 status.go:255] checking status of multinode-224754 ...
	I0429 12:40:12.412636  884239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:40:12.412748  884239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:40:12.427893  884239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40341
	I0429 12:40:12.428352  884239 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:40:12.428903  884239 main.go:141] libmachine: Using API Version  1
	I0429 12:40:12.428927  884239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:40:12.429348  884239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:40:12.429545  884239 main.go:141] libmachine: (multinode-224754) Calling .GetState
	I0429 12:40:12.431303  884239 status.go:330] multinode-224754 host status = "Stopped" (err=<nil>)
	I0429 12:40:12.431321  884239 status.go:343] host is not running, skipping remaining checks
	I0429 12:40:12.431330  884239 status.go:257] multinode-224754 status: &{Name:multinode-224754 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:40:12.431379  884239 status.go:255] checking status of multinode-224754-m02 ...
	I0429 12:40:12.431797  884239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0429 12:40:12.431854  884239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 12:40:12.446281  884239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43979
	I0429 12:40:12.446704  884239 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:40:12.447119  884239 main.go:141] libmachine: Using API Version  1
	I0429 12:40:12.447137  884239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:40:12.447408  884239 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:40:12.447584  884239 main.go:141] libmachine: (multinode-224754-m02) Calling .GetState
	I0429 12:40:12.448921  884239 status.go:330] multinode-224754-m02 host status = "Stopped" (err=<nil>)
	I0429 12:40:12.448935  884239 status.go:343] host is not running, skipping remaining checks
	I0429 12:40:12.448941  884239 status.go:257] multinode-224754-m02 status: &{Name:multinode-224754-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.29s)

TestMultiNode/serial/RestartMultiNode (79.86s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-224754 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0429 12:40:19.410307  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-224754 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m19.316140891s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-224754 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (79.86s)

TestMultiNode/serial/ValidateNameConflict (44.04s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-224754
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-224754-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-224754-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (81.820693ms)

-- stdout --
	* [multinode-224754-m02] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18773-852552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-852552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-224754-m02' is duplicated with machine name 'multinode-224754-m02' in profile 'multinode-224754'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-224754-m03 --driver=kvm2  --container-runtime=containerd
E0429 12:41:50.831160  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-224754-m03 --driver=kvm2  --container-runtime=containerd: (42.652151351s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-224754
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-224754: exit status 80 (222.053151ms)

-- stdout --
	* Adding node m03 to cluster multinode-224754 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-224754-m03 already exists in multinode-224754-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-224754-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-224754-m03: (1.025320685s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.04s)

TestPreload (228.42s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-621777 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-621777 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m23.350440683s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-621777 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-621777 image pull gcr.io/k8s-minikube/busybox: (1.074977927s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-621777
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-621777: (1m32.438941427s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-621777 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0429 12:45:19.410832  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-621777 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (50.254710832s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-621777 image list
helpers_test.go:175: Cleaning up "test-preload-621777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-621777
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-621777: (1.070831397s)
--- PASS: TestPreload (228.42s)

TestScheduledStopUnix (115.2s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-090515 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-090515 --memory=2048 --driver=kvm2  --container-runtime=containerd: (43.39389653s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-090515 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-090515 -n scheduled-stop-090515
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-090515 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-090515 --cancel-scheduled
E0429 12:46:50.830733  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-090515 -n scheduled-stop-090515
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-090515
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-090515 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-090515
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-090515: exit status 7 (85.20349ms)

-- stdout --
	scheduled-stop-090515
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-090515 -n scheduled-stop-090515
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-090515 -n scheduled-stop-090515: exit status 7 (78.969473ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-090515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-090515
--- PASS: TestScheduledStopUnix (115.20s)

TestRunningBinaryUpgrade (194.55s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2971274872 start -p running-upgrade-506130 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2971274872 start -p running-upgrade-506130 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m11.014055245s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-506130 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0429 12:50:19.410257  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-506130 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m1.828963135s)
helpers_test.go:175: Cleaning up "running-upgrade-506130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-506130
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-506130: (1.183600634s)
--- PASS: TestRunningBinaryUpgrade (194.55s)

TestKubernetesUpgrade (185.1s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-608998 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-608998 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (59.501801706s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-608998
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-608998: (1.677533911s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-608998 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-608998 status --format={{.Host}}: exit status 7 (89.593046ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-608998 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-608998 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m16.602222892s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-608998 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-608998 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-608998 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (110.302443ms)

-- stdout --
	* [kubernetes-upgrade-608998] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18773-852552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-852552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-608998
	    minikube start -p kubernetes-upgrade-608998 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6089982 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-608998 --kubernetes-version=v1.30.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-608998 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-608998 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (45.781473181s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-608998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-608998
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-608998: (1.27664091s)
--- PASS: TestKubernetesUpgrade (185.10s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-479729 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-479729 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (101.360918ms)

-- stdout --
	* [NoKubernetes-479729] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18773-852552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-852552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (95.94s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-479729 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-479729 --driver=kvm2  --container-runtime=containerd: (1m35.676248979s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-479729 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (95.94s)

TestNoKubernetes/serial/StartWithStopK8s (46.25s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-479729 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0429 12:50:02.458587  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-479729 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (44.951912265s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-479729 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-479729 status -o json: exit status 2 (251.256879ms)

-- stdout --
	{"Name":"NoKubernetes-479729","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-479729
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-479729: (1.048556555s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (46.25s)

TestStoppedBinaryUpgrade/Setup (0.45s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.45s)

TestStoppedBinaryUpgrade/Upgrade (166.1s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1089005398 start -p stopped-upgrade-855972 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1089005398 start -p stopped-upgrade-855972 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (59.79759553s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1089005398 -p stopped-upgrade-855972 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1089005398 -p stopped-upgrade-855972 stop: (1.45511006s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-855972 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-855972 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m44.850537035s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (166.10s)

TestNoKubernetes/serial/Start (36.52s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-479729 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-479729 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (36.519248807s)
--- PASS: TestNoKubernetes/serial/Start (36.52s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-479729 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-479729 "sudo systemctl is-active --quiet service kubelet": exit status 1 (237.196646ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

TestNoKubernetes/serial/ProfileList (4.65s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.709550716s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.65s)

TestNoKubernetes/serial/Stop (2.37s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-479729
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-479729: (2.374112103s)
--- PASS: TestNoKubernetes/serial/Stop (2.37s)

TestPause/serial/Start (62.31s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-295892 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-295892 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m2.31359338s)
--- PASS: TestPause/serial/Start (62.31s)

TestNoKubernetes/serial/StartNoArgs (47.78s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-479729 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-479729 --driver=kvm2  --container-runtime=containerd: (47.775739736s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (47.78s)

TestNetworkPlugins/group/false (3.45s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-447981 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-447981 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (130.950063ms)

-- stdout --
	* [false-447981] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18773-852552/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-852552/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0429 12:51:19.930763  890863 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:51:19.931038  890863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:51:19.931048  890863 out.go:304] Setting ErrFile to fd 2...
	I0429 12:51:19.931052  890863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:51:19.931239  890863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18773-852552/.minikube/bin
	I0429 12:51:19.931836  890863 out.go:298] Setting JSON to false
	I0429 12:51:19.932822  890863 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":9228,"bootTime":1714385852,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 12:51:19.932888  890863 start.go:139] virtualization: kvm guest
	I0429 12:51:19.935538  890863 out.go:177] * [false-447981] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 12:51:19.936905  890863 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 12:51:19.938239  890863 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:51:19.937008  890863 notify.go:220] Checking for updates...
	I0429 12:51:19.940890  890863 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18773-852552/kubeconfig
	I0429 12:51:19.942350  890863 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18773-852552/.minikube
	I0429 12:51:19.943719  890863 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 12:51:19.945177  890863 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:51:19.947049  890863 config.go:182] Loaded profile config "NoKubernetes-479729": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0429 12:51:19.947192  890863 config.go:182] Loaded profile config "pause-295892": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0429 12:51:19.947286  890863 config.go:182] Loaded profile config "stopped-upgrade-855972": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0429 12:51:19.947401  890863 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:51:19.984733  890863 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 12:51:19.986187  890863 start.go:297] selected driver: kvm2
	I0429 12:51:19.986208  890863 start.go:901] validating driver "kvm2" against <nil>
	I0429 12:51:19.986223  890863 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:51:19.988588  890863 out.go:177] 
	W0429 12:51:19.989890  890863 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0429 12:51:19.991063  890863 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-447981 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-447981

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-447981

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-447981

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-447981

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-447981

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-447981

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-447981

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-447981

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-447981

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-447981

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-447981

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-447981" does not exist
>>> k8s: netcat logs:
error: context "false-447981" does not exist
>>> k8s: describe coredns deployment:
error: context "false-447981" does not exist
>>> k8s: describe coredns pods:
error: context "false-447981" does not exist
>>> k8s: coredns logs:
error: context "false-447981" does not exist
>>> k8s: describe api server pod(s):
error: context "false-447981" does not exist
>>> k8s: api server logs:
error: context "false-447981" does not exist
>>> host: /etc/cni:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: ip a s:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: ip r s:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: iptables-save:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: iptables table nat:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> k8s: describe kube-proxy daemon set:
error: context "false-447981" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "false-447981" does not exist
>>> k8s: kube-proxy logs:
error: context "false-447981" does not exist
>>> host: kubelet daemon status:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: kubelet daemon config:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> k8s: kubelet logs:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
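The kubeconfig dumped above defines no clusters and no contexts, which is exactly why every `kubectl` collector in this debug dump fails with "context was not found for specified context: false-447981". A minimal sketch (plain `grep`; the guard helper is hypothetical, not part of minikube) of checking for that condition up front instead of letting each collector fail individually:

```shell
# Sample of the empty kubeconfig dumped above: no clusters, no contexts.
cfg='apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null'

# Hypothetical guard: skip kubectl-based collectors entirely when the
# config declares no contexts, rather than emitting one
# "context was not found" error per collector.
if printf '%s\n' "$cfg" | grep -q '^contexts: null$'; then
  echo "no contexts configured; skipping kubectl collectors"
fi
```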
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-447981
>>> host: docker daemon status:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: docker daemon config:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: /etc/docker/daemon.json:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: docker system info:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: cri-docker daemon status:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: cri-docker daemon config:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: cri-dockerd version:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: containerd daemon status:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: containerd daemon config:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: /etc/containerd/config.toml:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: containerd config dump:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: crio daemon status:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: crio daemon config:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: /etc/crio:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
>>> host: crio config:
* Profile "false-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-447981"
----------------------- debugLogs end: false-447981 [took: 3.166778676s] --------------------------------
helpers_test.go:175: Cleaning up "false-447981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-447981
--- PASS: TestNetworkPlugins/group/false (3.45s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-479729 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-479729 "sudo systemctl is-active --quiet service kubelet": exit status 1 (221.967283ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)
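The non-zero exit here is the expected result: `systemctl is-active --quiet <unit>` exits 0 when the unit is active and non-zero otherwise (3 is the usual "inactive" code, matching the "Process exited with status 3" in the ssh stderr above), so a failing check confirms the kubelet is not running. A runnable sketch of that shape, with the remote systemctl call replaced by a hypothetical stub so it runs without a VM:

```shell
# Stand-in for: minikube ssh "sudo systemctl is-active --quiet service kubelet"
# (hypothetical stub; real systemctl returns 3 for an inactive unit)
kubelet_is_active() { return 3; }

if ! kubelet_is_active; then
  echo "kubelet not running (expected for NoKubernetes)"
fi
```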

TestPause/serial/SecondStartNoReconfiguration (96.54s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-295892 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-295892 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m36.519436586s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (96.54s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-855972
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

TestPause/serial/Pause (0.74s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-295892 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

TestPause/serial/VerifyStatus (0.28s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-295892 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-295892 --output=json --layout=cluster: exit status 2 (277.776603ms)
-- stdout --
	{"Name":"pause-295892","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-295892","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
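The JSON above encodes component state as HTTP-style status codes (200 OK, 405 Stopped, 418 Paused), and the command's non-zero exit (status 2 here) lets scripts detect a non-running cluster without parsing at all. A quick sketch of checking the paused state from that JSON; real tooling would use `jq`, and the sample below is a trimmed, hypothetical excerpt of the output above:

```shell
# Trimmed sample of the `minikube status --output=json --layout=cluster`
# output shown above (418 is minikube's code for "Paused").
status='{"Name":"pause-295892","StatusCode":418,"StatusName":"Paused"}'

if printf '%s' "$status" | grep -q '"StatusCode":418'; then
  echo "cluster reports Paused"
fi
```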

TestPause/serial/Unpause (0.66s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-295892 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

TestPause/serial/PauseAgain (0.83s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-295892 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.83s)

TestPause/serial/DeletePaused (1.2s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-295892 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-295892 --alsologtostderr -v=5: (1.195292115s)
--- PASS: TestPause/serial/DeletePaused (1.20s)

TestStartStop/group/old-k8s-version/serial/FirstStart (178.66s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-760177 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-760177 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m58.663252396s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (178.66s)

TestPause/serial/VerifyDeletedResources (0.28s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.28s)

TestStartStop/group/embed-certs/serial/FirstStart (87.52s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-470098 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-470098 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (1m27.517578367s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.52s)

TestStartStop/group/no-preload/serial/FirstStart (139.32s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-893781 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-893781 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (2m19.320980792s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (139.32s)

TestStartStop/group/embed-certs/serial/DeployApp (8.34s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-470098 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [450856a2-ca4e-4f14-9052-b748db131ebe] Pending
helpers_test.go:344: "busybox" [450856a2-ca4e-4f14-9052-b748db131ebe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0429 12:55:19.409890  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
helpers_test.go:344: "busybox" [450856a2-ca4e-4f14-9052-b748db131ebe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004570239s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-470098 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.34s)
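The DeployApp tests above wait for the busybox pod to move through Pending into Running before the health deadline. A sketch of a polling loop of the same shape; the kubectl call is stubbed out (hypothetical) so it runs without a cluster:

```shell
# Stand-in for:
# kubectl --context embed-certs-470098 get pod busybox -o jsonpath='{.status.phase}'
# (hypothetical stub that reports the pod as already Running)
phase_of_busybox() { echo "Running"; }

result=""
for attempt in 1 2 3; do
  if [ "$(phase_of_busybox)" = "Running" ]; then
    result="Running"
    break
  fi
  sleep 1
done
echo "pod phase: $result"
```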

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-470098 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-470098 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/embed-certs/serial/Stop (91.85s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-470098 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-470098 --alsologtostderr -v=3: (1m31.852792417s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.85s)

TestStartStop/group/no-preload/serial/DeployApp (8.31s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-893781 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6d678821-230c-4fe3-8355-61fc9500107f] Pending
helpers_test.go:344: "busybox" [6d678821-230c-4fe3-8355-61fc9500107f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0429 12:56:33.878710  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
helpers_test.go:344: "busybox" [6d678821-230c-4fe3-8355-61fc9500107f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004483732s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-893781 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.31s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.94s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-814507 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-814507 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (1m37.939566474s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.94s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-893781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-893781 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/no-preload/serial/Stop (92.5s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-893781 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-893781 --alsologtostderr -v=3: (1m32.502810702s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.50s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.43s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-760177 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d3974c53-2aa8-4eaa-afb5-1e7e6070adeb] Pending
helpers_test.go:344: "busybox" [d3974c53-2aa8-4eaa-afb5-1e7e6070adeb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d3974c53-2aa8-4eaa-afb5-1e7e6070adeb] Running
E0429 12:56:50.830696  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.005032292s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-760177 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.43s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-760177 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-760177 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/old-k8s-version/serial/Stop (91.77s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-760177 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-760177 --alsologtostderr -v=3: (1m31.764966256s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-470098 -n embed-certs-470098
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-470098 -n embed-certs-470098: exit status 7 (76.508218ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-470098 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (302.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-470098 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-470098 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (5m1.878459985s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-470098 -n embed-certs-470098
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (302.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-893781 -n no-preload-893781
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-893781 -n no-preload-893781: exit status 7 (87.041881ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-893781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (296.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-893781 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-893781 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (4m56.142385643s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-893781 -n no-preload-893781
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (296.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-814507 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b500839d-b9b8-486c-9276-47e4d8da9506] Pending
helpers_test.go:344: "busybox" [b500839d-b9b8-486c-9276-47e4d8da9506] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b500839d-b9b8-486c-9276-47e4d8da9506] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.009683589s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-814507 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-814507 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-814507 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-814507 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-814507 --alsologtostderr -v=3: (1m31.737016354s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.74s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-760177 -n old-k8s-version-760177
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-760177 -n old-k8s-version-760177: exit status 7 (99.842299ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-760177 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (458.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-760177 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-760177 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (7m38.479317215s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-760177 -n old-k8s-version-760177
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (458.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-814507 -n default-k8s-diff-port-814507
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-814507 -n default-k8s-diff-port-814507: exit status 7 (87.501914ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-814507 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (295.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-814507 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
E0429 13:00:19.410265  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 13:01:50.831625  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-814507 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (4m55.508804783s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-814507 -n default-k8s-diff-port-814507
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (295.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-kp4bm" [8afdd327-35ca-48b6-8de5-a5eb1407f68a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004216014s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-kp4bm" [8afdd327-35ca-48b6-8de5-a5eb1407f68a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004838002s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-470098 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-470098 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-470098 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-470098 -n embed-certs-470098
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-470098 -n embed-certs-470098: exit status 2 (254.107285ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-470098 -n embed-certs-470098
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-470098 -n embed-certs-470098: exit status 2 (264.225645ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-470098 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-470098 -n embed-certs-470098
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-470098 -n embed-certs-470098
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.77s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (58.86s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-447654 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-447654 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (58.862677825s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-kc6jf" [662182fe-87b5-4573-9f79-fc9e883830c0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006760211s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-447654 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-447654 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.12004618s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-447654 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-447654 --alsologtostderr -v=3: (2.341925683s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-kc6jf" [662182fe-87b5-4573-9f79-fc9e883830c0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004490505s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-893781 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-447654 -n newest-cni-447654
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-447654 -n newest-cni-447654: exit status 7 (79.119895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-447654 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (33.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-447654 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-447654 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (33.57829436s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-447654 -n newest-cni-447654
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-893781 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-893781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-893781 -n no-preload-893781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-893781 -n no-preload-893781: exit status 2 (272.131859ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-893781 -n no-preload-893781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-893781 -n no-preload-893781: exit status 2 (273.004018ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-893781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-893781 -n no-preload-893781
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-893781 -n no-preload-893781
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.88s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (84.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-447981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-447981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m24.578202778s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.58s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-447654 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-447654 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-447654 -n newest-cni-447654
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-447654 -n newest-cni-447654: exit status 2 (266.078435ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-447654 -n newest-cni-447654
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-447654 -n newest-cni-447654: exit status 2 (261.644358ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-447654 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-447654 -n newest-cni-447654
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-447654 -n newest-cni-447654
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.65s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (101.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-447981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-447981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m41.129934216s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (101.13s)

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-447981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-447981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rnp86" [0a696665-124e-4f28-a52b-829b57c3a7bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-rnp86" [0a696665-124e-4f28-a52b-829b57c3a7bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.006808353s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.23s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fjs7r" [4d704c60-9663-47ea-ab33-2c9d567106e2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004383731s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fjs7r" [4d704c60-9663-47ea-ab33-2c9d567106e2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004770053s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-814507 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-447981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-447981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-447981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-814507 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-814507 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-814507 -n default-k8s-diff-port-814507
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-814507 -n default-k8s-diff-port-814507: exit status 2 (277.587703ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-814507 -n default-k8s-diff-port-814507
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-814507 -n default-k8s-diff-port-814507: exit status 2 (296.136009ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-814507 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-814507 -n default-k8s-diff-port-814507
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-814507 -n default-k8s-diff-port-814507
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.97s)

TestNetworkPlugins/group/calico/Start (91.01s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-447981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-447981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m31.014187604s)
--- PASS: TestNetworkPlugins/group/calico/Start (91.01s)

TestNetworkPlugins/group/custom-flannel/Start (104.47s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-447981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0429 13:05:19.410386  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-447981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m44.470960976s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (104.47s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qfr4p" [3a44034e-44d0-42be-b977-5e420bd05853] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005645296s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-447981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-447981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5ccg7" [88df3b1f-0995-42cf-b7a1-5d48a0fd30e8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5ccg7" [88df3b1f-0995-42cf-b7a1-5d48a0fd30e8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004249081s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.30s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-447981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-447981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-447981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dn2vp" [6e35b54f-970d-4588-9387-3b8de6821129] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.008048885s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dn2vp" [6e35b54f-970d-4588-9387-3b8de6821129] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005601833s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-760177 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestNetworkPlugins/group/enable-default-cni/Start (72.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-447981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-447981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m12.802229977s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.80s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-760177 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-760177 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-760177 -n old-k8s-version-760177
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-760177 -n old-k8s-version-760177: exit status 2 (345.607748ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-760177 -n old-k8s-version-760177
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-760177 -n old-k8s-version-760177: exit status 2 (325.842048ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-760177 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-760177 -n old-k8s-version-760177
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-760177 -n old-k8s-version-760177
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.23s)

TestNetworkPlugins/group/flannel/Start (96.38s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-447981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
E0429 13:06:32.353277  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/no-preload-893781/client.crt: no such file or directory
E0429 13:06:32.358612  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/no-preload-893781/client.crt: no such file or directory
E0429 13:06:32.368980  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/no-preload-893781/client.crt: no such file or directory
E0429 13:06:32.389298  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/no-preload-893781/client.crt: no such file or directory
E0429 13:06:32.429722  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/no-preload-893781/client.crt: no such file or directory
E0429 13:06:32.510063  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/no-preload-893781/client.crt: no such file or directory
E0429 13:06:32.670433  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/no-preload-893781/client.crt: no such file or directory
E0429 13:06:32.995826  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/no-preload-893781/client.crt: no such file or directory
E0429 13:06:33.636779  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/no-preload-893781/client.crt: no such file or directory
E0429 13:06:34.918024  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/no-preload-893781/client.crt: no such file or directory
E0429 13:06:37.478844  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/no-preload-893781/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-447981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m36.381883286s)
--- PASS: TestNetworkPlugins/group/flannel/Start (96.38s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-j4gwc" [3f490c4f-d279-4e7a-892b-012e319b88cd] Running
E0429 13:06:42.459010  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/addons-399337/client.crt: no such file or directory
E0429 13:06:42.599353  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/no-preload-893781/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006998861s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-447981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-447981 replace --force -f testdata/netcat-deployment.yaml
E0429 13:06:47.669386  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/old-k8s-version-760177/client.crt: no such file or directory
E0429 13:06:47.675567  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/old-k8s-version-760177/client.crt: no such file or directory
E0429 13:06:47.686380  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/old-k8s-version-760177/client.crt: no such file or directory
E0429 13:06:47.706781  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/old-k8s-version-760177/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0429 13:06:47.747056  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/old-k8s-version-760177/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-2h27v" [a4609af6-cca2-4103-9439-ca2dbabf7793] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0429 13:06:47.827743  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/old-k8s-version-760177/client.crt: no such file or directory
E0429 13:06:47.988131  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/old-k8s-version-760177/client.crt: no such file or directory
E0429 13:06:48.309078  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/old-k8s-version-760177/client.crt: no such file or directory
E0429 13:06:48.950036  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/old-k8s-version-760177/client.crt: no such file or directory
E0429 13:06:50.230554  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/old-k8s-version-760177/client.crt: no such file or directory
E0429 13:06:50.831649  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/functional-765881/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-2h27v" [a4609af6-cca2-4103-9439-ca2dbabf7793] Running
E0429 13:06:52.790879  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/old-k8s-version-760177/client.crt: no such file or directory
E0429 13:06:52.840128  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/no-preload-893781/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004651755s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)

TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-447981 exec deployment/netcat -- nslookup kubernetes.default
E0429 13:06:57.912042  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/old-k8s-version-760177/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-447981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-447981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-447981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-447981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-56zbn" [1947c0ff-2593-4d84-9e80-a25065d65a44] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-56zbn" [1947c0ff-2593-4d84-9e80-a25065d65a44] Running
E0429 13:07:08.153462  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/old-k8s-version-760177/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005469392s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-447981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-447981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0429 13:07:13.321031  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/no-preload-893781/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-447981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

TestNetworkPlugins/group/bridge/Start (101.2s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-447981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-447981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m41.196620626s)
--- PASS: TestNetworkPlugins/group/bridge/Start (101.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-447981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-447981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-mcplb" [c35fe9ea-a8b2-4f00-83d1-f0954c362082] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0429 13:07:28.633661  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/old-k8s-version-760177/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-mcplb" [c35fe9ea-a8b2-4f00-83d1-f0954c362082] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004380739s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (15.93s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-447981 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-447981 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.179946924s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-447981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (15.93s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-447981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-447981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2cqrc" [3e51db92-9c0c-4924-8186-81faf34f5ea4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004939941s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-447981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-447981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tv5n4" [089b5215-d973-453f-9e95-179db77193b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tv5n4" [089b5215-d973-453f-9e95-179db77193b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005533225s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-447981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-447981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-447981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-447981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-447981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-84kt9" [690f7836-6ea0-4831-b173-3f682c7cc39e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0429 13:08:58.718735  859881 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18773-852552/.minikube/profiles/default-k8s-diff-port-814507/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-84kt9" [690f7836-6ea0-4831-b173-3f682c7cc39e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005067907s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-447981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-447981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-447981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

Test skip (36/325)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.0/cached-images 0
15 TestDownloadOnly/v1.30.0/binaries 0
16 TestDownloadOnly/v1.30.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
264 TestStartStop/group/disable-driver-mounts 0.16
271 TestNetworkPlugins/group/kubenet 3.58
279 TestNetworkPlugins/group/cilium 3.57

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-977938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-977938
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.58s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-447981 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-447981

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-447981

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-447981

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-447981

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-447981

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-447981

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-447981

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-447981

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-447981

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-447981

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-447981

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-447981" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-447981" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-447981

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-447981"

                                                
                                                
----------------------- debugLogs end: kubenet-447981 [took: 3.423992826s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-447981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-447981
--- SKIP: TestNetworkPlugins/group/kubenet (3.58s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.57s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-447981 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-447981

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-447981

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-447981

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-447981

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-447981

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-447981

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-447981

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-447981

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-447981

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-447981

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-447981

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-447981" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-447981

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-447981

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-447981

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-447981

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-447981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-447981" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-447981

>>> host: docker daemon status:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: docker daemon config:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: docker system info:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: cri-docker daemon status:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: cri-docker daemon config:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: cri-dockerd version:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: containerd daemon status:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: containerd daemon config:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: containerd config dump:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: crio daemon status:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: crio daemon config:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: /etc/crio:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

>>> host: crio config:
* Profile "cilium-447981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-447981"

----------------------- debugLogs end: cilium-447981 [took: 3.423608853s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-447981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-447981
--- SKIP: TestNetworkPlugins/group/cilium (3.57s)
