Test Report: KVM_Linux_containerd 17363

Commit: 9401f4c578044658a0ecc50e70738aa1fc99eff9:2023-10-05:31314

Failed tests (2/305)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 34    | TestAddons/parallel/Headlamp | 4.82         |
| 51    | TestErrorSpam/setup          | 63.3         |
TestAddons/parallel/Headlamp (4.82s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:822: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-127532 --alsologtostderr -v=1
addons_test.go:822: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-127532 --alsologtostderr -v=1: exit status 11 (1.068289706s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1005 20:06:41.516575  205466 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:06:41.516932  205466 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:06:41.516947  205466 out.go:309] Setting ErrFile to fd 2...
	I1005 20:06:41.516955  205466 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:06:41.517261  205466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-196818/.minikube/bin
	I1005 20:06:41.517832  205466 mustload.go:65] Loading cluster: addons-127532
	I1005 20:06:41.518419  205466 config.go:182] Loaded profile config "addons-127532": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 20:06:41.518467  205466 addons.go:594] checking whether the cluster is paused
	I1005 20:06:41.518621  205466 config.go:182] Loaded profile config "addons-127532": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 20:06:41.518647  205466 host.go:66] Checking if "addons-127532" exists ...
	I1005 20:06:41.519222  205466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:06:41.519327  205466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:06:41.536960  205466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I1005 20:06:41.537577  205466 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:06:41.538363  205466 main.go:141] libmachine: Using API Version  1
	I1005 20:06:41.538396  205466 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:06:41.538922  205466 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:06:41.539152  205466 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:06:41.541186  205466 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:06:41.541516  205466 ssh_runner.go:195] Run: systemctl --version
	I1005 20:06:41.541545  205466 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:06:41.544486  205466 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:06:41.544956  205466 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:06:41.544999  205466 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:06:41.545287  205466 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:06:41.545505  205466 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:06:41.545680  205466 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:06:41.545886  205466 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:06:41.767028  205466 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1005 20:06:41.767139  205466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1005 20:06:42.029497  205466 cri.go:89] found id: "b688bcf0597547826d6d8a844044a1fcdf4411dfa6fd0a912a0ce4e1ba282fdf"
	I1005 20:06:42.029543  205466 cri.go:89] found id: "890d646ff680210c40d0a838fa24f8cf962f8e3cc6e0b8cf82465db6c2bf84a9"
	I1005 20:06:42.029550  205466 cri.go:89] found id: "aa84f02a7c5aa3ee1f24694fa293bf93c6d84fed9a123cfe163019f52fa18038"
	I1005 20:06:42.029556  205466 cri.go:89] found id: "803ff5fa81d57480951877063f3d1c35f70e61971aa1fe9617a4eee76da82dce"
	I1005 20:06:42.029561  205466 cri.go:89] found id: "f716a81d349c6ee75d3a7e40112cc100329dabb23e868f9734a1e103f44f5837"
	I1005 20:06:42.029570  205466 cri.go:89] found id: "4b8d89a0883e6396e1a7ebf04148b051f626d1892a77297763aa54676f8eaca6"
	I1005 20:06:42.029575  205466 cri.go:89] found id: "f3c99d94dc5f71827e6868502ca89475e363246d5772ebda27ca514b51004818"
	I1005 20:06:42.029579  205466 cri.go:89] found id: "cea6df26726a462d3c3ced504e749ef39341ab2946a3442481f64d4ce8f12235"
	I1005 20:06:42.029584  205466 cri.go:89] found id: "a2f2abd3f0b462716b08b55d55d81e40601874c5c77ede2d83d3b1ca477e3b79"
	I1005 20:06:42.029597  205466 cri.go:89] found id: "d782967574d73a629dc102e95f88db5582a888295b6c3c086fa58daa8ab3c7b5"
	I1005 20:06:42.029603  205466 cri.go:89] found id: "d32fa6ab29aeb18d99a2bd7d4232f637fcc6fe3fa467b140e985361b588d338a"
	I1005 20:06:42.029608  205466 cri.go:89] found id: "1c3f3f61bf07cd1098f5600e00ce10ec0136772e778caed1295d7dfb13347747"
	I1005 20:06:42.029615  205466 cri.go:89] found id: "fe470a37a89c59a408195d0fb26da5ef513017635319f433d483005404ecc475"
	I1005 20:06:42.029626  205466 cri.go:89] found id: "c77114a1dcc2304241aa0e3e2ccc474ff92ef13a44d7c07dbba3a5630e742864"
	I1005 20:06:42.029636  205466 cri.go:89] found id: "47bb3532af7dda6d6f48404584d33d777ac559c10e71f66c58f11bd57b4beb52"
	I1005 20:06:42.029641  205466 cri.go:89] found id: "c21d1c73991b4101047bcc513a6442d7aad446d587ecbfed28bce88a5145de42"
	I1005 20:06:42.029647  205466 cri.go:89] found id: "e7196076ec2424635bdbab5f6ad8cc4801bbb3c985c15ba13b2ae7011fdf4e2a"
	I1005 20:06:42.029659  205466 cri.go:89] found id: "7acc5650cb59c003f57c4d5efc042886275bc492435f7922d35821dc205b7119"
	I1005 20:06:42.029670  205466 cri.go:89] found id: "55902ce9be72256c404d7ea303be500c40e8a1aacb9345858cc19717023d93a7"
	I1005 20:06:42.029678  205466 cri.go:89] found id: "0ce6cd93e10aa4dfe56925f84f155e2b521bd5989658e0026b7902356f03b7ad"
	I1005 20:06:42.029683  205466 cri.go:89] found id: "08cdeaff99532ba296326a2927fe09ca25ba8f2c98f9a41c464987d1c6c67d70"
	I1005 20:06:42.029691  205466 cri.go:89] found id: "187b61c6228f6aeb260635d7ce329dc74124c163ccf24a73fd7798ba26589330"
	I1005 20:06:42.029702  205466 cri.go:89] found id: ""
	I1005 20:06:42.029787  205466 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1005 20:06:42.514004  205466 main.go:141] libmachine: Making call to close driver server
	I1005 20:06:42.514044  205466 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:06:42.514530  205466 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:06:42.514551  205466 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:06:42.518253  205466 out.go:177] 
	W1005 20:06:42.520483  205466 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-10-05T20:06:42Z" level=error msg="stat /run/containerd/runc/k8s.io/47bb3532af7dda6d6f48404584d33d777ac559c10e71f66c58f11bd57b4beb52: no such file or directory"
	
	W1005 20:06:42.520526  205466 out.go:239] * 
	W1005 20:06:42.524404  205466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1005 20:06:42.526816  205466 out.go:177] 

** /stderr **
addons_test.go:824: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-127532 --alsologtostderr -v=1": exit status 11
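The stderr above shows a time-of-check/time-of-use race: the addon-enable "paused" check first enumerates kube-system container IDs (`crictl ps -a --quiet ...`), then runs `sudo runc --root /run/containerd/runc/k8s.io list -f json`, and runc fails with "no such file or directory" because a container's state directory was removed between the two steps. A minimal sketch of that failure mode, using plain POSIX sh and a temp directory in place of the runc state root (the function name and `abc123` ID are hypothetical, not minikube code):

```shell
#!/bin/sh
# Sketch of the TOCTOU race in the log: enumerate state dirs, then stat
# each one; an entry deleted in between reproduces the "no such file or
# directory" failure that minikube reports as MK_ADDON_ENABLE_PAUSED.
simulate_paused_check_race() {
  root=$(mktemp -d)            # stands in for /run/containerd/runc/k8s.io
  mkdir "$root/abc123"         # a fake container state directory
  ids=$(ls "$root")            # step 1: enumerate (like `crictl ps -a --quiet`)
  rmdir "$root/abc123"         # container torn down concurrently
  for id in $ids; do           # step 2: stat each entry (as `runc list` does)
    stat "$root/$id" >/dev/null 2>&1 || echo "race reproduced for $id"
  done
  rmdir "$root"
}
simulate_paused_check_race
```

Running it prints `race reproduced for abc123`, mirroring how a single stale ID is enough to make the whole `runc list` check exit non-zero.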
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-127532 -n addons-127532
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-127532 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-127532 logs -n 25: (2.445549555s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-973200 | jenkins | v1.31.2 | 05 Oct 23 20:02 UTC |                     |
	|         | -p download-only-973200                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-973200 | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC |                     |
	|         | -p download-only-973200                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC | 05 Oct 23 20:03 UTC |
	| delete  | -p download-only-973200                                                                     | download-only-973200 | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC | 05 Oct 23 20:03 UTC |
	| delete  | -p download-only-973200                                                                     | download-only-973200 | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC | 05 Oct 23 20:03 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-677315 | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC |                     |
	|         | binary-mirror-677315                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33855                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-677315                                                                     | binary-mirror-677315 | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC | 05 Oct 23 20:03 UTC |
	| addons  | disable dashboard -p                                                                        | addons-127532        | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC |                     |
	|         | addons-127532                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-127532        | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC |                     |
	|         | addons-127532                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-127532 --wait=true                                                                | addons-127532        | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC | 05 Oct 23 20:06 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-127532 addons                                                                        | addons-127532        | jenkins | v1.31.2 | 05 Oct 23 20:06 UTC | 05 Oct 23 20:06 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-127532        | jenkins | v1.31.2 | 05 Oct 23 20:06 UTC | 05 Oct 23 20:06 UTC |
	|         | addons-127532                                                                               |                      |         |         |                     |                     |
	| addons  | addons-127532 addons disable                                                                | addons-127532        | jenkins | v1.31.2 | 05 Oct 23 20:06 UTC | 05 Oct 23 20:06 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-127532 ssh cat                                                                       | addons-127532        | jenkins | v1.31.2 | 05 Oct 23 20:06 UTC | 05 Oct 23 20:06 UTC |
	|         | /opt/local-path-provisioner/pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-127532        | jenkins | v1.31.2 | 05 Oct 23 20:06 UTC |                     |
	|         | -p addons-127532                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-127532 addons disable                                                                | addons-127532        | jenkins | v1.31.2 | 05 Oct 23 20:06 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 20:03:45
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 20:03:45.745354  204426 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:03:45.745651  204426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:03:45.745663  204426 out.go:309] Setting ErrFile to fd 2...
	I1005 20:03:45.745668  204426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:03:45.745919  204426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-196818/.minikube/bin
	I1005 20:03:45.746773  204426 out.go:303] Setting JSON to false
	I1005 20:03:45.747726  204426 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":20778,"bootTime":1696515448,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:03:45.747798  204426 start.go:138] virtualization: kvm guest
	I1005 20:03:45.750512  204426 out.go:177] * [addons-127532] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:03:45.753009  204426 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:03:45.753063  204426 notify.go:220] Checking for updates...
	I1005 20:03:45.755175  204426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:03:45.757303  204426 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-196818/kubeconfig
	I1005 20:03:45.759469  204426 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-196818/.minikube
	I1005 20:03:45.761472  204426 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:03:45.763289  204426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:03:45.765570  204426 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:03:45.801143  204426 out.go:177] * Using the kvm2 driver based on user configuration
	I1005 20:03:45.803017  204426 start.go:298] selected driver: kvm2
	I1005 20:03:45.803042  204426 start.go:902] validating driver "kvm2" against <nil>
	I1005 20:03:45.803056  204426 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:03:45.803903  204426 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:03:45.804004  204426 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17363-196818/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1005 20:03:45.820093  204426 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1005 20:03:45.820164  204426 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 20:03:45.820367  204426 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1005 20:03:45.820428  204426 cni.go:84] Creating CNI manager for ""
	I1005 20:03:45.820440  204426 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1005 20:03:45.820448  204426 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1005 20:03:45.820457  204426 start_flags.go:321] config:
	{Name:addons-127532 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-127532 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:03:45.820582  204426 iso.go:125] acquiring lock: {Name:mk57851d2f6689e37478de1afefefb6b4948072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:03:45.822917  204426 out.go:177] * Starting control plane node addons-127532 in cluster addons-127532
	I1005 20:03:45.824583  204426 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1005 20:03:45.824646  204426 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-196818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4
	I1005 20:03:45.824660  204426 cache.go:57] Caching tarball of preloaded images
	I1005 20:03:45.824788  204426 preload.go:174] Found /home/jenkins/minikube-integration/17363-196818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1005 20:03:45.824800  204426 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on containerd
	I1005 20:03:45.825133  204426 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/config.json ...
	I1005 20:03:45.825155  204426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/config.json: {Name:mk26eb43acba1d99b4e11ea177ad201e37d1a85c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:03:45.825321  204426 start.go:365] acquiring machines lock for addons-127532: {Name:mkaa40900eab873e69ecc21b04086a962e26a6af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1005 20:03:45.825368  204426 start.go:369] acquired machines lock for "addons-127532" in 33.886µs
	I1005 20:03:45.825385  204426 start.go:93] Provisioning new machine with config: &{Name:addons-127532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-127532 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1005 20:03:45.825446  204426 start.go:125] createHost starting for "" (driver="kvm2")
	I1005 20:03:45.827713  204426 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1005 20:03:45.827915  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:03:45.827966  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:03:45.844192  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I1005 20:03:45.844900  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:03:45.845665  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:03:45.845694  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:03:45.846088  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:03:45.846317  204426 main.go:141] libmachine: (addons-127532) Calling .GetMachineName
	I1005 20:03:45.846555  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:03:45.846731  204426 start.go:159] libmachine.API.Create for "addons-127532" (driver="kvm2")
	I1005 20:03:45.846776  204426 client.go:168] LocalClient.Create starting
	I1005 20:03:45.846818  204426 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17363-196818/.minikube/certs/ca.pem
	I1005 20:03:45.955387  204426 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17363-196818/.minikube/certs/cert.pem
	I1005 20:03:46.156041  204426 main.go:141] libmachine: Running pre-create checks...
	I1005 20:03:46.156071  204426 main.go:141] libmachine: (addons-127532) Calling .PreCreateCheck
	I1005 20:03:46.156720  204426 main.go:141] libmachine: (addons-127532) Calling .GetConfigRaw
	I1005 20:03:46.157400  204426 main.go:141] libmachine: Creating machine...
	I1005 20:03:46.157421  204426 main.go:141] libmachine: (addons-127532) Calling .Create
	I1005 20:03:46.157670  204426 main.go:141] libmachine: (addons-127532) Creating KVM machine...
	I1005 20:03:46.159297  204426 main.go:141] libmachine: (addons-127532) DBG | found existing default KVM network
	I1005 20:03:46.160342  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:46.160130  204448 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000221210}
	I1005 20:03:46.166739  204426 main.go:141] libmachine: (addons-127532) DBG | trying to create private KVM network mk-addons-127532 192.168.39.0/24...
	I1005 20:03:46.248302  204426 main.go:141] libmachine: (addons-127532) DBG | private KVM network mk-addons-127532 192.168.39.0/24 created
	I1005 20:03:46.248336  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:46.248282  204448 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17363-196818/.minikube
	I1005 20:03:46.248353  204426 main.go:141] libmachine: (addons-127532) Setting up store path in /home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532 ...
	I1005 20:03:46.248372  204426 main.go:141] libmachine: (addons-127532) Building disk image from file:///home/jenkins/minikube-integration/17363-196818/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1005 20:03:46.248462  204426 main.go:141] libmachine: (addons-127532) Downloading /home/jenkins/minikube-integration/17363-196818/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17363-196818/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1005 20:03:46.487833  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:46.487649  204448 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa...
	I1005 20:03:46.735213  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:46.734986  204448 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/addons-127532.rawdisk...
	I1005 20:03:46.735255  204426 main.go:141] libmachine: (addons-127532) DBG | Writing magic tar header
	I1005 20:03:46.735268  204426 main.go:141] libmachine: (addons-127532) DBG | Writing SSH key tar header
	I1005 20:03:46.735277  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:46.735127  204448 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532 ...
	I1005 20:03:46.735425  204426 main.go:141] libmachine: (addons-127532) Setting executable bit set on /home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532 (perms=drwx------)
	I1005 20:03:46.735456  204426 main.go:141] libmachine: (addons-127532) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532
	I1005 20:03:46.735470  204426 main.go:141] libmachine: (addons-127532) Setting executable bit set on /home/jenkins/minikube-integration/17363-196818/.minikube/machines (perms=drwxr-xr-x)
	I1005 20:03:46.735499  204426 main.go:141] libmachine: (addons-127532) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17363-196818/.minikube/machines
	I1005 20:03:46.735527  204426 main.go:141] libmachine: (addons-127532) Setting executable bit set on /home/jenkins/minikube-integration/17363-196818/.minikube (perms=drwxr-xr-x)
	I1005 20:03:46.735545  204426 main.go:141] libmachine: (addons-127532) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17363-196818/.minikube
	I1005 20:03:46.735568  204426 main.go:141] libmachine: (addons-127532) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17363-196818
	I1005 20:03:46.735583  204426 main.go:141] libmachine: (addons-127532) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1005 20:03:46.735597  204426 main.go:141] libmachine: (addons-127532) DBG | Checking permissions on dir: /home/jenkins
	I1005 20:03:46.735611  204426 main.go:141] libmachine: (addons-127532) DBG | Checking permissions on dir: /home
	I1005 20:03:46.735629  204426 main.go:141] libmachine: (addons-127532) DBG | Skipping /home - not owner
	I1005 20:03:46.735655  204426 main.go:141] libmachine: (addons-127532) Setting executable bit set on /home/jenkins/minikube-integration/17363-196818 (perms=drwxrwxr-x)
	I1005 20:03:46.735671  204426 main.go:141] libmachine: (addons-127532) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1005 20:03:46.735703  204426 main.go:141] libmachine: (addons-127532) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1005 20:03:46.735782  204426 main.go:141] libmachine: (addons-127532) Creating domain...
	I1005 20:03:46.737008  204426 main.go:141] libmachine: (addons-127532) define libvirt domain using xml: 
	I1005 20:03:46.737033  204426 main.go:141] libmachine: (addons-127532) <domain type='kvm'>
	I1005 20:03:46.737045  204426 main.go:141] libmachine: (addons-127532)   <name>addons-127532</name>
	I1005 20:03:46.737053  204426 main.go:141] libmachine: (addons-127532)   <memory unit='MiB'>4000</memory>
	I1005 20:03:46.737062  204426 main.go:141] libmachine: (addons-127532)   <vcpu>2</vcpu>
	I1005 20:03:46.737069  204426 main.go:141] libmachine: (addons-127532)   <features>
	I1005 20:03:46.737084  204426 main.go:141] libmachine: (addons-127532)     <acpi/>
	I1005 20:03:46.737097  204426 main.go:141] libmachine: (addons-127532)     <apic/>
	I1005 20:03:46.737133  204426 main.go:141] libmachine: (addons-127532)     <pae/>
	I1005 20:03:46.737162  204426 main.go:141] libmachine: (addons-127532)     
	I1005 20:03:46.737172  204426 main.go:141] libmachine: (addons-127532)   </features>
	I1005 20:03:46.737180  204426 main.go:141] libmachine: (addons-127532)   <cpu mode='host-passthrough'>
	I1005 20:03:46.737188  204426 main.go:141] libmachine: (addons-127532)   
	I1005 20:03:46.737197  204426 main.go:141] libmachine: (addons-127532)   </cpu>
	I1005 20:03:46.737207  204426 main.go:141] libmachine: (addons-127532)   <os>
	I1005 20:03:46.737216  204426 main.go:141] libmachine: (addons-127532)     <type>hvm</type>
	I1005 20:03:46.737230  204426 main.go:141] libmachine: (addons-127532)     <boot dev='cdrom'/>
	I1005 20:03:46.737242  204426 main.go:141] libmachine: (addons-127532)     <boot dev='hd'/>
	I1005 20:03:46.737255  204426 main.go:141] libmachine: (addons-127532)     <bootmenu enable='no'/>
	I1005 20:03:46.737278  204426 main.go:141] libmachine: (addons-127532)   </os>
	I1005 20:03:46.737288  204426 main.go:141] libmachine: (addons-127532)   <devices>
	I1005 20:03:46.737298  204426 main.go:141] libmachine: (addons-127532)     <disk type='file' device='cdrom'>
	I1005 20:03:46.737330  204426 main.go:141] libmachine: (addons-127532)       <source file='/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/boot2docker.iso'/>
	I1005 20:03:46.737344  204426 main.go:141] libmachine: (addons-127532)       <target dev='hdc' bus='scsi'/>
	I1005 20:03:46.737352  204426 main.go:141] libmachine: (addons-127532)       <readonly/>
	I1005 20:03:46.737360  204426 main.go:141] libmachine: (addons-127532)     </disk>
	I1005 20:03:46.737368  204426 main.go:141] libmachine: (addons-127532)     <disk type='file' device='disk'>
	I1005 20:03:46.737380  204426 main.go:141] libmachine: (addons-127532)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1005 20:03:46.737389  204426 main.go:141] libmachine: (addons-127532)       <source file='/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/addons-127532.rawdisk'/>
	I1005 20:03:46.737395  204426 main.go:141] libmachine: (addons-127532)       <target dev='hda' bus='virtio'/>
	I1005 20:03:46.737402  204426 main.go:141] libmachine: (addons-127532)     </disk>
	I1005 20:03:46.737408  204426 main.go:141] libmachine: (addons-127532)     <interface type='network'>
	I1005 20:03:46.737415  204426 main.go:141] libmachine: (addons-127532)       <source network='mk-addons-127532'/>
	I1005 20:03:46.737421  204426 main.go:141] libmachine: (addons-127532)       <model type='virtio'/>
	I1005 20:03:46.737427  204426 main.go:141] libmachine: (addons-127532)     </interface>
	I1005 20:03:46.737432  204426 main.go:141] libmachine: (addons-127532)     <interface type='network'>
	I1005 20:03:46.737457  204426 main.go:141] libmachine: (addons-127532)       <source network='default'/>
	I1005 20:03:46.737479  204426 main.go:141] libmachine: (addons-127532)       <model type='virtio'/>
	I1005 20:03:46.737492  204426 main.go:141] libmachine: (addons-127532)     </interface>
	I1005 20:03:46.737498  204426 main.go:141] libmachine: (addons-127532)     <serial type='pty'>
	I1005 20:03:46.737507  204426 main.go:141] libmachine: (addons-127532)       <target port='0'/>
	I1005 20:03:46.737515  204426 main.go:141] libmachine: (addons-127532)     </serial>
	I1005 20:03:46.737527  204426 main.go:141] libmachine: (addons-127532)     <console type='pty'>
	I1005 20:03:46.737537  204426 main.go:141] libmachine: (addons-127532)       <target type='serial' port='0'/>
	I1005 20:03:46.737547  204426 main.go:141] libmachine: (addons-127532)     </console>
	I1005 20:03:46.737555  204426 main.go:141] libmachine: (addons-127532)     <rng model='virtio'>
	I1005 20:03:46.737619  204426 main.go:141] libmachine: (addons-127532)       <backend model='random'>/dev/random</backend>
	I1005 20:03:46.737641  204426 main.go:141] libmachine: (addons-127532)     </rng>
	I1005 20:03:46.737652  204426 main.go:141] libmachine: (addons-127532)     
	I1005 20:03:46.737678  204426 main.go:141] libmachine: (addons-127532)     
	I1005 20:03:46.737698  204426 main.go:141] libmachine: (addons-127532)   </devices>
	I1005 20:03:46.737709  204426 main.go:141] libmachine: (addons-127532) </domain>
	I1005 20:03:46.737746  204426 main.go:141] libmachine: (addons-127532) 
	I1005 20:03:46.743589  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:cd:1b:01 in network default
	I1005 20:03:46.744556  204426 main.go:141] libmachine: (addons-127532) Ensuring networks are active...
	I1005 20:03:46.744593  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:03:46.745962  204426 main.go:141] libmachine: (addons-127532) Ensuring network default is active
	I1005 20:03:46.746894  204426 main.go:141] libmachine: (addons-127532) Ensuring network mk-addons-127532 is active
	I1005 20:03:46.747803  204426 main.go:141] libmachine: (addons-127532) Getting domain xml...
	I1005 20:03:46.748916  204426 main.go:141] libmachine: (addons-127532) Creating domain...
	I1005 20:03:48.041159  204426 main.go:141] libmachine: (addons-127532) Waiting to get IP...
	I1005 20:03:48.042174  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:03:48.042738  204426 main.go:141] libmachine: (addons-127532) DBG | unable to find current IP address of domain addons-127532 in network mk-addons-127532
	I1005 20:03:48.042807  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:48.042712  204448 retry.go:31] will retry after 235.160669ms: waiting for machine to come up
	I1005 20:03:48.279780  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:03:48.280573  204426 main.go:141] libmachine: (addons-127532) DBG | unable to find current IP address of domain addons-127532 in network mk-addons-127532
	I1005 20:03:48.280604  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:48.280518  204448 retry.go:31] will retry after 269.599997ms: waiting for machine to come up
	I1005 20:03:48.552515  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:03:48.553213  204426 main.go:141] libmachine: (addons-127532) DBG | unable to find current IP address of domain addons-127532 in network mk-addons-127532
	I1005 20:03:48.553251  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:48.553121  204448 retry.go:31] will retry after 294.826875ms: waiting for machine to come up
	I1005 20:03:48.849956  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:03:48.850583  204426 main.go:141] libmachine: (addons-127532) DBG | unable to find current IP address of domain addons-127532 in network mk-addons-127532
	I1005 20:03:48.850617  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:48.850512  204448 retry.go:31] will retry after 469.128627ms: waiting for machine to come up
	I1005 20:03:49.321223  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:03:49.321769  204426 main.go:141] libmachine: (addons-127532) DBG | unable to find current IP address of domain addons-127532 in network mk-addons-127532
	I1005 20:03:49.321800  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:49.321687  204448 retry.go:31] will retry after 628.382742ms: waiting for machine to come up
	I1005 20:03:49.951929  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:03:49.952605  204426 main.go:141] libmachine: (addons-127532) DBG | unable to find current IP address of domain addons-127532 in network mk-addons-127532
	I1005 20:03:49.952632  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:49.952535  204448 retry.go:31] will retry after 910.042657ms: waiting for machine to come up
	I1005 20:03:50.864547  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:03:50.865054  204426 main.go:141] libmachine: (addons-127532) DBG | unable to find current IP address of domain addons-127532 in network mk-addons-127532
	I1005 20:03:50.865087  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:50.864993  204448 retry.go:31] will retry after 935.17566ms: waiting for machine to come up
	I1005 20:03:51.801623  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:03:51.802106  204426 main.go:141] libmachine: (addons-127532) DBG | unable to find current IP address of domain addons-127532 in network mk-addons-127532
	I1005 20:03:51.802136  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:51.802051  204448 retry.go:31] will retry after 1.482178281s: waiting for machine to come up
	I1005 20:03:53.285552  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:03:53.286012  204426 main.go:141] libmachine: (addons-127532) DBG | unable to find current IP address of domain addons-127532 in network mk-addons-127532
	I1005 20:03:53.286037  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:53.285960  204448 retry.go:31] will retry after 1.491633061s: waiting for machine to come up
	I1005 20:03:54.779410  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:03:54.779912  204426 main.go:141] libmachine: (addons-127532) DBG | unable to find current IP address of domain addons-127532 in network mk-addons-127532
	I1005 20:03:54.779950  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:54.779866  204448 retry.go:31] will retry after 1.794314646s: waiting for machine to come up
	I1005 20:03:56.576242  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:03:56.576684  204426 main.go:141] libmachine: (addons-127532) DBG | unable to find current IP address of domain addons-127532 in network mk-addons-127532
	I1005 20:03:56.576729  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:56.576626  204448 retry.go:31] will retry after 2.398120684s: waiting for machine to come up
	I1005 20:03:58.978474  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:03:58.978992  204426 main.go:141] libmachine: (addons-127532) DBG | unable to find current IP address of domain addons-127532 in network mk-addons-127532
	I1005 20:03:58.979021  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:03:58.978940  204448 retry.go:31] will retry after 3.472344139s: waiting for machine to come up
	I1005 20:04:02.453687  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:02.454231  204426 main.go:141] libmachine: (addons-127532) DBG | unable to find current IP address of domain addons-127532 in network mk-addons-127532
	I1005 20:04:02.454263  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:04:02.454152  204448 retry.go:31] will retry after 3.725142221s: waiting for machine to come up
	I1005 20:04:06.180723  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:06.181342  204426 main.go:141] libmachine: (addons-127532) DBG | unable to find current IP address of domain addons-127532 in network mk-addons-127532
	I1005 20:04:06.181375  204426 main.go:141] libmachine: (addons-127532) DBG | I1005 20:04:06.181282  204448 retry.go:31] will retry after 4.864227225s: waiting for machine to come up
	I1005 20:04:11.047592  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.048133  204426 main.go:141] libmachine: (addons-127532) Found IP for machine: 192.168.39.191
	I1005 20:04:11.048163  204426 main.go:141] libmachine: (addons-127532) Reserving static IP address...
	I1005 20:04:11.048238  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has current primary IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.048866  204426 main.go:141] libmachine: (addons-127532) DBG | unable to find host DHCP lease matching {name: "addons-127532", mac: "52:54:00:e0:f8:fe", ip: "192.168.39.191"} in network mk-addons-127532
	I1005 20:04:11.151092  204426 main.go:141] libmachine: (addons-127532) DBG | Getting to WaitForSSH function...
	I1005 20:04:11.151142  204426 main.go:141] libmachine: (addons-127532) Reserved static IP address: 192.168.39.191
	I1005 20:04:11.151158  204426 main.go:141] libmachine: (addons-127532) Waiting for SSH to be available...
	I1005 20:04:11.155130  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.155554  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:11.155589  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.155812  204426 main.go:141] libmachine: (addons-127532) DBG | Using SSH client type: external
	I1005 20:04:11.155841  204426 main.go:141] libmachine: (addons-127532) DBG | Using SSH private key: /home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa (-rw-------)
	I1005 20:04:11.155900  204426 main.go:141] libmachine: (addons-127532) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1005 20:04:11.155925  204426 main.go:141] libmachine: (addons-127532) DBG | About to run SSH command:
	I1005 20:04:11.155937  204426 main.go:141] libmachine: (addons-127532) DBG | exit 0
	I1005 20:04:11.246987  204426 main.go:141] libmachine: (addons-127532) DBG | SSH cmd err, output: <nil>: 
	I1005 20:04:11.247354  204426 main.go:141] libmachine: (addons-127532) KVM machine creation complete!
	I1005 20:04:11.247778  204426 main.go:141] libmachine: (addons-127532) Calling .GetConfigRaw
	I1005 20:04:11.248642  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:04:11.249430  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:04:11.249787  204426 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1005 20:04:11.249813  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:04:11.252699  204426 main.go:141] libmachine: Detecting operating system of created instance...
	I1005 20:04:11.252723  204426 main.go:141] libmachine: Waiting for SSH to be available...
	I1005 20:04:11.252730  204426 main.go:141] libmachine: Getting to WaitForSSH function...
	I1005 20:04:11.252738  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:04:11.258818  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.259900  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:11.259965  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.260184  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:04:11.260616  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:04:11.261045  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:04:11.261379  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:04:11.261717  204426 main.go:141] libmachine: Using SSH client type: native
	I1005 20:04:11.262456  204426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1005 20:04:11.262480  204426 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1005 20:04:11.382076  204426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 20:04:11.382113  204426 main.go:141] libmachine: Detecting the provisioner...
	I1005 20:04:11.382126  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:04:11.385385  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.385943  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:11.385985  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.386236  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:04:11.386484  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:04:11.386844  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:04:11.387142  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:04:11.387387  204426 main.go:141] libmachine: Using SSH client type: native
	I1005 20:04:11.387712  204426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1005 20:04:11.387724  204426 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1005 20:04:11.511715  204426 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1005 20:04:11.511839  204426 main.go:141] libmachine: found compatible host: buildroot
	I1005 20:04:11.511858  204426 main.go:141] libmachine: Provisioning with buildroot...
	I1005 20:04:11.511872  204426 main.go:141] libmachine: (addons-127532) Calling .GetMachineName
	I1005 20:04:11.512184  204426 buildroot.go:166] provisioning hostname "addons-127532"
	I1005 20:04:11.512211  204426 main.go:141] libmachine: (addons-127532) Calling .GetMachineName
	I1005 20:04:11.512481  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:04:11.515540  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.516012  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:11.516057  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.516290  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:04:11.516531  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:04:11.516996  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:04:11.517198  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:04:11.517433  204426 main.go:141] libmachine: Using SSH client type: native
	I1005 20:04:11.517804  204426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1005 20:04:11.517822  204426 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-127532 && echo "addons-127532" | sudo tee /etc/hostname
	I1005 20:04:11.651480  204426 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-127532
	
	I1005 20:04:11.651516  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:04:11.654631  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.655016  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:11.655053  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.655305  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:04:11.655561  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:04:11.655754  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:04:11.655868  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:04:11.656009  204426 main.go:141] libmachine: Using SSH client type: native
	I1005 20:04:11.656457  204426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1005 20:04:11.656477  204426 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-127532' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-127532/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-127532' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 20:04:11.785068  204426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
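For reference, the `/etc/hosts` update that the runner executes over SSH just above can be reproduced locally as a minimal sketch. It operates on a temporary copy rather than the real `/etc/hosts`, drops `sudo`, and takes the hostname from this log; `NAME` and the seed file contents are stand-ins.

```shell
# Sketch of the logged /etc/hosts hostname fix-up, applied to a temp copy.
NAME=addons-127532
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"

# If no entry for the node name exists, rewrite (or append) the 127.0.1.1 line.
if ! grep -q "\s$NAME$" "$HOSTS"; then
  if grep -q '^127.0.1.1\s' "$HOSTS"; then
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep "$NAME" "$HOSTS"
```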
	I1005 20:04:11.785106  204426 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17363-196818/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-196818/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-196818/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-196818/.minikube}
	I1005 20:04:11.785167  204426 buildroot.go:174] setting up certificates
	I1005 20:04:11.785183  204426 provision.go:83] configureAuth start
	I1005 20:04:11.785200  204426 main.go:141] libmachine: (addons-127532) Calling .GetMachineName
	I1005 20:04:11.785582  204426 main.go:141] libmachine: (addons-127532) Calling .GetIP
	I1005 20:04:11.788955  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.789507  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:11.789536  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.789730  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:04:11.792430  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.792968  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:11.792991  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.793258  204426 provision.go:138] copyHostCerts
	I1005 20:04:11.793348  204426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-196818/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-196818/.minikube/ca.pem (1082 bytes)
	I1005 20:04:11.793507  204426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-196818/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-196818/.minikube/cert.pem (1123 bytes)
	I1005 20:04:11.793570  204426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-196818/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-196818/.minikube/key.pem (1675 bytes)
	I1005 20:04:11.793647  204426 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-196818/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-196818/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-196818/.minikube/certs/ca-key.pem org=jenkins.addons-127532 san=[192.168.39.191 192.168.39.191 localhost 127.0.0.1 minikube addons-127532]
	I1005 20:04:11.912976  204426 provision.go:172] copyRemoteCerts
	I1005 20:04:11.913060  204426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 20:04:11.913089  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:04:11.916322  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.916772  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:11.916812  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:11.917054  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:04:11.917278  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:04:11.917453  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:04:11.917660  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:04:12.007919  204426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-196818/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 20:04:12.034960  204426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-196818/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1005 20:04:12.068854  204426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-196818/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 20:04:12.100390  204426 provision.go:86] duration metric: configureAuth took 315.107754ms
	I1005 20:04:12.100473  204426 buildroot.go:189] setting minikube options for container-runtime
	I1005 20:04:12.100788  204426 config.go:182] Loaded profile config "addons-127532": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 20:04:12.100843  204426 main.go:141] libmachine: Checking connection to Docker...
	I1005 20:04:12.100865  204426 main.go:141] libmachine: (addons-127532) Calling .GetURL
	I1005 20:04:12.102959  204426 main.go:141] libmachine: (addons-127532) DBG | Using libvirt version 6000000
	I1005 20:04:12.106169  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:12.106563  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:12.106599  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:12.106902  204426 main.go:141] libmachine: Docker is up and running!
	I1005 20:04:12.106922  204426 main.go:141] libmachine: Reticulating splines...
	I1005 20:04:12.106930  204426 client.go:171] LocalClient.Create took 26.26014527s
	I1005 20:04:12.106957  204426 start.go:167] duration metric: libmachine.API.Create for "addons-127532" took 26.260231048s
	I1005 20:04:12.106968  204426 start.go:300] post-start starting for "addons-127532" (driver="kvm2")
	I1005 20:04:12.106981  204426 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 20:04:12.107013  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:04:12.107352  204426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 20:04:12.107391  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:04:12.112396  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:12.112928  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:12.112974  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:12.113369  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:04:12.113711  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:04:12.114099  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:04:12.114496  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:04:12.204373  204426 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 20:04:12.208849  204426 info.go:137] Remote host: Buildroot 2021.02.12
	I1005 20:04:12.208882  204426 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-196818/.minikube/addons for local assets ...
	I1005 20:04:12.208971  204426 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-196818/.minikube/files for local assets ...
	I1005 20:04:12.208994  204426 start.go:303] post-start completed in 102.019285ms
	I1005 20:04:12.209028  204426 main.go:141] libmachine: (addons-127532) Calling .GetConfigRaw
	I1005 20:04:12.209647  204426 main.go:141] libmachine: (addons-127532) Calling .GetIP
	I1005 20:04:12.212415  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:12.212754  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:12.212784  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:12.213094  204426 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/config.json ...
	I1005 20:04:12.213273  204426 start.go:128] duration metric: createHost completed in 26.387817535s
	I1005 20:04:12.213297  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:04:12.215932  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:12.216319  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:12.216355  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:12.216543  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:04:12.216763  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:04:12.216966  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:04:12.217138  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:04:12.217294  204426 main.go:141] libmachine: Using SSH client type: native
	I1005 20:04:12.217613  204426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1005 20:04:12.217624  204426 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1005 20:04:12.335536  204426 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696536252.314816178
	
	I1005 20:04:12.335568  204426 fix.go:206] guest clock: 1696536252.314816178
	I1005 20:04:12.335585  204426 fix.go:219] Guest: 2023-10-05 20:04:12.314816178 +0000 UTC Remote: 2023-10-05 20:04:12.213286099 +0000 UTC m=+26.504117552 (delta=101.530079ms)
	I1005 20:04:12.335634  204426 fix.go:190] guest clock delta is within tolerance: 101.530079ms
	I1005 20:04:12.335640  204426 start.go:83] releasing machines lock for "addons-127532", held for 26.510263359s
	I1005 20:04:12.335667  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:04:12.336031  204426 main.go:141] libmachine: (addons-127532) Calling .GetIP
	I1005 20:04:12.339157  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:12.339517  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:12.339563  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:12.339717  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:04:12.340298  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:04:12.340648  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:04:12.340764  204426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 20:04:12.340817  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:04:12.340918  204426 ssh_runner.go:195] Run: cat /version.json
	I1005 20:04:12.340950  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:04:12.343718  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:12.343912  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:12.344106  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:12.344150  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:12.344261  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:04:12.344383  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:12.344411  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:12.344488  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:04:12.344688  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:04:12.344708  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:04:12.344903  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:04:12.344917  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:04:12.345085  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:04:12.345208  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:04:12.428211  204426 ssh_runner.go:195] Run: systemctl --version
	I1005 20:04:12.459461  204426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1005 20:04:12.465811  204426 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1005 20:04:12.465887  204426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 20:04:12.482722  204426 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
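The CNI-disabling step logged above renames any bridge/podman configs by appending `.mk_disabled`, leaving other configs alone. A standalone sketch of that `find`/`mv` pattern, using a temp directory in place of `/etc/cni/net.d` (file names here are stand-ins seeded for illustration):

```shell
# Sketch of the logged CNI config disabling: rename matching files,
# skip anything already disabled.
D=$(mktemp -d)
touch "$D/87-podman-bridge.conflist" "$D/99-loopback.conf"
find "$D" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$D"
```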
	I1005 20:04:12.482754  204426 start.go:469] detecting cgroup driver to use...
	I1005 20:04:12.482850  204426 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1005 20:04:12.515684  204426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1005 20:04:12.529014  204426 docker.go:197] disabling cri-docker service (if available) ...
	I1005 20:04:12.529084  204426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 20:04:12.543307  204426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 20:04:12.557484  204426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1005 20:04:12.671336  204426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 20:04:12.794337  204426 docker.go:213] disabling docker service ...
	I1005 20:04:12.794427  204426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 20:04:12.810102  204426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 20:04:12.822782  204426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 20:04:12.930520  204426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 20:04:13.042608  204426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1005 20:04:13.055792  204426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 20:04:13.074257  204426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1005 20:04:13.083938  204426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1005 20:04:13.093522  204426 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1005 20:04:13.093607  204426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1005 20:04:13.103119  204426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 20:04:13.113001  204426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1005 20:04:13.122487  204426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1005 20:04:13.131862  204426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1005 20:04:13.141887  204426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
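The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place: pin the pause image to 3.9, force `SystemdCgroup = false` (cgroupfs driver), migrate the runtime to `io.containerd.runc.v2`, and point `conf_dir` at `/etc/cni/net.d`. A sketch applying the same edits to a temp copy (the seed TOML fragment is a stand-in, not the real config):

```shell
# Sketch of the logged containerd config rewrites, on a temp copy.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
  sandbox_image = "registry.k8s.io/pause:3.6"
  SystemdCgroup = true
  runtime_type = "io.containerd.runtime.v1.linux"
  conf_dir = "/opt/cni/net.d"
EOF
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$CFG"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"
cat "$CFG"
```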
	I1005 20:04:13.151893  204426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 20:04:13.161090  204426 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1005 20:04:13.161174  204426 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1005 20:04:13.173824  204426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 20:04:13.184149  204426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:04:13.307336  204426 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1005 20:04:13.338524  204426 start.go:516] Will wait 60s for socket path /run/containerd/containerd.sock
	I1005 20:04:13.338611  204426 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1005 20:04:13.344756  204426 retry.go:31] will retry after 1.499116372s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
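The `will retry after 1.499116372s` line above is minikube's wait-for-socket loop: `stat` the containerd socket, back off, retry until it appears or the 60s budget runs out. A minimal sketch of that poll-until-stat pattern (the path is a temp stand-in, and a background `touch` simulates containerd coming up):

```shell
# Sketch of the logged socket wait: poll with stat until the path exists
# or a deadline passes.
SOCK=$(mktemp -u)              # stand-in for /run/containerd/containerd.sock
( sleep 1; touch "$SOCK" ) &   # simulate the daemon creating its socket
deadline=$(( $(date +%s) + 60 ))
until stat "$SOCK" >/dev/null 2>&1; do
  [ "$(date +%s)" -ge "$deadline" ] && { echo timeout; break; }
  sleep 0.5
done
echo ready
```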
	I1005 20:04:14.845440  204426 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1005 20:04:14.850825  204426 start.go:537] Will wait 60s for crictl version
	I1005 20:04:14.850902  204426 ssh_runner.go:195] Run: which crictl
	I1005 20:04:14.854738  204426 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 20:04:14.894728  204426 start.go:553] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.6
	RuntimeApiVersion:  v1
	I1005 20:04:14.894810  204426 ssh_runner.go:195] Run: containerd --version
	I1005 20:04:14.929462  204426 ssh_runner.go:195] Run: containerd --version
	I1005 20:04:14.962227  204426 out.go:177] * Preparing Kubernetes v1.28.2 on containerd 1.7.6 ...
	I1005 20:04:14.963881  204426 main.go:141] libmachine: (addons-127532) Calling .GetIP
	I1005 20:04:14.966543  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:14.966996  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:04:14.967032  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:04:14.967139  204426 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1005 20:04:14.971115  204426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
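The one-liner above keeps `/etc/hosts` idempotent: strip any existing `host.minikube.internal` line, append the current one, and copy the result back. Sketched against a temp file (seed contents and the stale `10.0.0.1` entry are stand-ins; the log does the final copy with `sudo cp`):

```shell
# Sketch of the logged host.minikube.internal refresh, on a temp copy.
H=$(mktemp)
printf '127.0.0.1 localhost\n10.0.0.1\thost.minikube.internal\n' > "$H"
{ grep -v 'host.minikube.internal$' "$H"; \
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$H.new"
mv "$H.new" "$H"
cat "$H"
```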
	I1005 20:04:14.983587  204426 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1005 20:04:14.983644  204426 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 20:04:15.023164  204426 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1005 20:04:15.023234  204426 ssh_runner.go:195] Run: which lz4
	I1005 20:04:15.027174  204426 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1005 20:04:15.031362  204426 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1005 20:04:15.031393  204426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-196818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (456662433 bytes)
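The `stat` failure followed by the `scp` above is a check-then-transfer step: only ship the 456MB preload tarball if it is not already on the guest. A local sketch of that pattern using `cp` in place of the log's scp (paths and file contents are stand-ins):

```shell
# Sketch of the logged existence-check-then-copy for the preload tarball.
SRC=$(mktemp)
DST=$(mktemp -u)               # stand-in for /preloaded.tar.lz4 (absent)
echo preload-data > "$SRC"
if ! stat -c "%s %y" "$DST" >/dev/null 2>&1; then
  cp "$SRC" "$DST"             # the log performs this step over scp
fi
cat "$DST"
```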
	I1005 20:04:16.844683  204426 containerd.go:547] Took 1.817543 seconds to copy over tarball
	I1005 20:04:16.844799  204426 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1005 20:04:19.712012  204426 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.867169165s)
	I1005 20:04:19.712055  204426 containerd.go:554] Took 2.867340 seconds to extract the tarball
	I1005 20:04:19.712070  204426 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1005 20:04:19.754834  204426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 20:04:19.861667  204426 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1005 20:04:19.886901  204426 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 20:04:19.936965  204426 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.2 registry.k8s.io/kube-controller-manager:v1.28.2 registry.k8s.io/kube-scheduler:v1.28.2 registry.k8s.io/kube-proxy:v1.28.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1005 20:04:19.937058  204426 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I1005 20:04:19.937087  204426 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1005 20:04:19.937123  204426 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1005 20:04:19.937150  204426 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I1005 20:04:19.937207  204426 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I1005 20:04:19.937256  204426 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1005 20:04:19.937070  204426 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:04:19.937359  204426 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1005 20:04:19.938364  204426 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I1005 20:04:19.938410  204426 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I1005 20:04:19.938419  204426 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	I1005 20:04:19.938419  204426 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1005 20:04:19.938369  204426 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1005 20:04:19.938366  204426 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1005 20:04:19.938451  204426 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1005 20:04:19.938508  204426 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:04:20.135735  204426 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.9"
	I1005 20:04:20.142500  204426 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.10.1"
	I1005 20:04:20.143038  204426 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.9-0"
	I1005 20:04:20.143813  204426 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.28.2"
	I1005 20:04:20.169162  204426 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.28.2"
	I1005 20:04:20.169231  204426 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.28.2"
	I1005 20:04:20.170394  204426 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.28.2"
	I1005 20:04:20.194770  204426 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1005 20:04:21.041860  204426 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I1005 20:04:21.041916  204426 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I1005 20:04:21.041978  204426 ssh_runner.go:195] Run: which crictl
	I1005 20:04:21.082468  204426 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1005 20:04:21.082519  204426 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1005 20:04:21.082561  204426 ssh_runner.go:195] Run: which crictl
	I1005 20:04:21.511499  204426 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.9-0": (1.368406255s)
	I1005 20:04:21.511564  204426 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1005 20:04:21.511575  204426 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.28.2": (1.367724578s)
	I1005 20:04:21.511600  204426 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1005 20:04:21.511612  204426 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.2" does not exist at hash "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57" in container runtime
	I1005 20:04:21.511638  204426 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1005 20:04:21.511650  204426 ssh_runner.go:195] Run: which crictl
	I1005 20:04:21.511673  204426 ssh_runner.go:195] Run: which crictl
	I1005 20:04:21.738251  204426 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.28.2": (1.569037514s)
	I1005 20:04:21.738309  204426 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.2" does not exist at hash "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8" in container runtime
	I1005 20:04:21.738356  204426 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.2
	I1005 20:04:21.738369  204426 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.28.2": (1.569113695s)
	I1005 20:04:21.738414  204426 ssh_runner.go:195] Run: which crictl
	I1005 20:04:21.738421  204426 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.2" does not exist at hash "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce" in container runtime
	I1005 20:04:21.738454  204426 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.2
	I1005 20:04:21.738501  204426 ssh_runner.go:195] Run: which crictl
	I1005 20:04:21.750534  204426 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.28.2": (1.580099197s)
	I1005 20:04:21.750564  204426 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5": (1.555751229s)
	I1005 20:04:21.750585  204426 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.2" needs transfer: "registry.k8s.io/kube-proxy:v1.28.2" does not exist at hash "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0" in container runtime
	I1005 20:04:21.750600  204426 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1005 20:04:21.750624  204426 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.2
	I1005 20:04:21.750637  204426 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:04:21.750682  204426 ssh_runner.go:195] Run: which crictl
	I1005 20:04:21.750693  204426 ssh_runner.go:195] Run: which crictl
	I1005 20:04:21.750736  204426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I1005 20:04:21.750756  204426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1005 20:04:21.750780  204426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1005 20:04:21.750796  204426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.2
	I1005 20:04:21.753413  204426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.2
	I1005 20:04:21.754862  204426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.2
	I1005 20:04:22.064432  204426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:04:22.064446  204426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-196818/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I1005 20:04:22.064436  204426 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.2
	I1005 20:04:22.064547  204426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-196818/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1005 20:04:22.084381  204426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-196818/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1005 20:04:22.084443  204426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-196818/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.2
	I1005 20:04:22.084466  204426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-196818/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.2
	I1005 20:04:22.084503  204426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-196818/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2
	I1005 20:04:22.190344  204426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-196818/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.2
	I1005 20:04:22.190473  204426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-196818/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1005 20:04:22.190520  204426 cache_images.go:92] LoadImages completed in 2.253530339s
	W1005 20:04:22.190620  204426 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17363-196818/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9: no such file or directory
	I1005 20:04:22.190690  204426 ssh_runner.go:195] Run: sudo crictl info
	I1005 20:04:22.230976  204426 cni.go:84] Creating CNI manager for ""
	I1005 20:04:22.230999  204426 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1005 20:04:22.231023  204426 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1005 20:04:22.231044  204426 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.191 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-127532 NodeName:addons-127532 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1005 20:04:22.231161  204426 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-127532"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
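	The kubeadm config rendered above is a four-document YAML stream. A minimal stdlib-only sketch (the helper functions and the trimmed config string are illustrative, not minikube code) of checking that all four documents are present and that the ClusterConfiguration pod subnet matches the kube-proxy clusterCIDR:

```python
# Sanity-check a multi-document kubeadm config like the one minikube
# renders above. Hypothetical helpers: split the stream on "---" lines
# and read simple "key: value" fields, without a YAML library.

KUBEADM_CONFIG = """\
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/16"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
"""

def split_docs(stream: str):
    """Split a YAML stream into documents on standalone '---' lines."""
    docs, current = [], []
    for line in stream.splitlines():
        if line.strip() == "---":
            docs.append("\n".join(current))
            current = []
        else:
            current.append(line)
    docs.append("\n".join(current))
    return docs

def field(doc: str, key: str):
    """Return the (unquoted) value of the first 'key: value' line."""
    for line in doc.splitlines():
        stripped = line.strip()
        if stripped.startswith(key + ":"):
            return stripped.split(":", 1)[1].strip().strip('"')
    return None

docs = split_docs(KUBEADM_CONFIG)
kinds = [field(d, "kind") for d in docs]
pod_subnet = field(docs[1], "podSubnet")
cluster_cidr = field(docs[3], "clusterCIDR")
assert pod_subnet == cluster_cidr == "10.244.0.0/16"
print(kinds)
```

	The same cross-check applies to the real file: the `podSubnet` in ClusterConfiguration and the `clusterCIDR` in KubeProxyConfiguration must both equal the pod CIDR (`10.244.0.0/16`) chosen by `kubeadm.go:87`.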
	
	I1005 20:04:22.231227  204426 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-127532 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-127532 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1005 20:04:22.231275  204426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1005 20:04:22.241161  204426 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 20:04:22.241235  204426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1005 20:04:22.250922  204426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1005 20:04:22.267286  204426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1005 20:04:22.283631  204426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I1005 20:04:22.301957  204426 ssh_runner.go:195] Run: grep 192.168.39.191	control-plane.minikube.internal$ /etc/hosts
	I1005 20:04:22.305775  204426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
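	The bash one-liner above rewrites `/etc/hosts` idempotently: it filters out any line already ending in a tab plus `control-plane.minikube.internal`, then appends the current mapping. A sketch of the same logic in Python (the function name and sample contents are illustrative):

```python
# Idempotent hosts-file update, mirroring minikube's
# "{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ..." pattern above:
# remove any stale "control-plane.minikube.internal" entry, append the new one.

def update_hosts(contents: str, ip: str,
                 name: str = "control-plane.minikube.internal") -> str:
    suffix = "\t" + name
    # Drop lines that already map the control-plane name (possibly stale).
    kept = [line for line in contents.splitlines() if not line.endswith(suffix)]
    kept.append(f"{ip}{suffix}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n192.168.39.50\tcontrol-plane.minikube.internal\n"
after = update_hosts(before, "192.168.39.191")
print(after)
```

	Running it twice with the same IP leaves the file unchanged, which is why the grep-then-append pattern is safe to re-run on every start.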
	I1005 20:04:22.318092  204426 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532 for IP: 192.168.39.191
	I1005 20:04:22.318138  204426 certs.go:190] acquiring lock for shared ca certs: {Name:mk1ff4c65d9e2efc24c4e0a0f6abee42686061fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:04:22.318287  204426 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17363-196818/.minikube/ca.key
	I1005 20:04:22.443814  204426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-196818/.minikube/ca.crt ...
	I1005 20:04:22.443850  204426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-196818/.minikube/ca.crt: {Name:mkc1dcb8648a9ba37ebcf6ce5bf372b42c8ac251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:04:22.444070  204426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-196818/.minikube/ca.key ...
	I1005 20:04:22.444086  204426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-196818/.minikube/ca.key: {Name:mk060251c40c3c74e7cfe48bdc32966a25683317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:04:22.444191  204426 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17363-196818/.minikube/proxy-client-ca.key
	I1005 20:04:22.724441  204426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-196818/.minikube/proxy-client-ca.crt ...
	I1005 20:04:22.724475  204426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-196818/.minikube/proxy-client-ca.crt: {Name:mk595ad5234e4ced66f0dc32eb2c207d69c2e90f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:04:22.724687  204426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-196818/.minikube/proxy-client-ca.key ...
	I1005 20:04:22.724703  204426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-196818/.minikube/proxy-client-ca.key: {Name:mk3c7f29b76a45de9605593f657b013c19aaf937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:04:22.724853  204426 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.key
	I1005 20:04:22.724880  204426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt with IP's: []
	I1005 20:04:22.836919  204426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt ...
	I1005 20:04:22.836962  204426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: {Name:mkd24e5431288c3481fbf8814fce4172d87f9b18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:04:22.837184  204426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.key ...
	I1005 20:04:22.837210  204426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.key: {Name:mk3f40ff393160e347ace7641fcb298297fbf2aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:04:22.837325  204426 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/apiserver.key.6f081b7d
	I1005 20:04:22.837353  204426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/apiserver.crt.6f081b7d with IP's: [192.168.39.191 10.96.0.1 127.0.0.1 10.0.0.1]
	I1005 20:04:22.938174  204426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/apiserver.crt.6f081b7d ...
	I1005 20:04:22.938223  204426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/apiserver.crt.6f081b7d: {Name:mk27c5fb06b489c0bc51b99abfe3d37d4d74f6e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:04:22.938424  204426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/apiserver.key.6f081b7d ...
	I1005 20:04:22.938445  204426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/apiserver.key.6f081b7d: {Name:mk5f217093b085a17b1a28c004b283e71fd41911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:04:22.938541  204426 certs.go:337] copying /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/apiserver.crt.6f081b7d -> /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/apiserver.crt
	I1005 20:04:22.938631  204426 certs.go:341] copying /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/apiserver.key.6f081b7d -> /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/apiserver.key
	I1005 20:04:22.938681  204426 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/proxy-client.key
	I1005 20:04:22.938699  204426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/proxy-client.crt with IP's: []
	I1005 20:04:23.103455  204426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/proxy-client.crt ...
	I1005 20:04:23.103495  204426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/proxy-client.crt: {Name:mk5aae0e50a3967d93a91752fe9b60b56a68c042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:04:23.103702  204426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/proxy-client.key ...
	I1005 20:04:23.103723  204426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/proxy-client.key: {Name:mkc61b23c2747165f4f7ece5c7f683bea6d3ab64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:04:23.103959  204426 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-196818/.minikube/certs/home/jenkins/minikube-integration/17363-196818/.minikube/certs/ca-key.pem (1675 bytes)
	I1005 20:04:23.104002  204426 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-196818/.minikube/certs/home/jenkins/minikube-integration/17363-196818/.minikube/certs/ca.pem (1082 bytes)
	I1005 20:04:23.104025  204426 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-196818/.minikube/certs/home/jenkins/minikube-integration/17363-196818/.minikube/certs/cert.pem (1123 bytes)
	I1005 20:04:23.104057  204426 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-196818/.minikube/certs/home/jenkins/minikube-integration/17363-196818/.minikube/certs/key.pem (1675 bytes)
	I1005 20:04:23.104794  204426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1005 20:04:23.129824  204426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1005 20:04:23.154269  204426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1005 20:04:23.178563  204426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1005 20:04:23.204242  204426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-196818/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 20:04:23.232694  204426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-196818/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1005 20:04:23.260667  204426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-196818/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 20:04:23.290357  204426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-196818/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 20:04:23.317528  204426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-196818/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 20:04:23.345826  204426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1005 20:04:23.367319  204426 ssh_runner.go:195] Run: openssl version
	I1005 20:04:23.373800  204426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 20:04:23.387104  204426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:04:23.393035  204426 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 20:04 /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:04:23.393107  204426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 20:04:23.400620  204426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1005 20:04:23.414418  204426 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 20:04:23.419821  204426 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 20:04:23.419923  204426 kubeadm.go:404] StartCluster: {Name:addons-127532 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-127532 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:04:23.420052  204426 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1005 20:04:23.420114  204426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1005 20:04:23.469038  204426 cri.go:89] found id: ""
	I1005 20:04:23.469131  204426 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1005 20:04:23.481830  204426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1005 20:04:23.493860  204426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1005 20:04:23.505719  204426 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
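	The "config check failed, skipping stale config cleanup" branch above treats missing kubeconfig files as evidence of a fresh node: cleanup only makes sense when configs from a prior run exist. A stdlib sketch of that decision (the helper is hypothetical; the paths mirror the log):

```python
# Sketch of the stale-config decision logged above: if the kubeconfigs
# from a previous run are not all present, skip cleanup and proceed
# straight to "kubeadm init".
import os

KUBECONFIGS = [
    "/etc/kubernetes/admin.conf",
    "/etc/kubernetes/kubelet.conf",
    "/etc/kubernetes/controller-manager.conf",
    "/etc/kubernetes/scheduler.conf",
]

def needs_stale_cleanup(paths=KUBECONFIGS) -> bool:
    """Cleanup applies only when every config from a prior run exists."""
    return all(os.path.exists(p) for p in paths)

print(needs_stale_cleanup(["/nonexistent/admin.conf"]))  # -> False
```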
	I1005 20:04:23.505811  204426 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1005 20:04:23.731477  204426 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1005 20:04:51.249261  204426 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1005 20:04:51.249343  204426 kubeadm.go:322] [preflight] Running pre-flight checks
	I1005 20:04:51.249427  204426 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1005 20:04:51.249575  204426 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1005 20:04:51.249712  204426 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1005 20:04:51.249815  204426 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1005 20:04:51.252246  204426 out.go:204]   - Generating certificates and keys ...
	I1005 20:04:51.252395  204426 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1005 20:04:51.252505  204426 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1005 20:04:51.252609  204426 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1005 20:04:51.252693  204426 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1005 20:04:51.252781  204426 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1005 20:04:51.252852  204426 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1005 20:04:51.252927  204426 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1005 20:04:51.253065  204426 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-127532 localhost] and IPs [192.168.39.191 127.0.0.1 ::1]
	I1005 20:04:51.253148  204426 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1005 20:04:51.253304  204426 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-127532 localhost] and IPs [192.168.39.191 127.0.0.1 ::1]
	I1005 20:04:51.253389  204426 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1005 20:04:51.253473  204426 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1005 20:04:51.253541  204426 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1005 20:04:51.253632  204426 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1005 20:04:51.253704  204426 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1005 20:04:51.253766  204426 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1005 20:04:51.253855  204426 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1005 20:04:51.253929  204426 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1005 20:04:51.254032  204426 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1005 20:04:51.254123  204426 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1005 20:04:51.256291  204426 out.go:204]   - Booting up control plane ...
	I1005 20:04:51.256431  204426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1005 20:04:51.256531  204426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1005 20:04:51.256635  204426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1005 20:04:51.256773  204426 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1005 20:04:51.256909  204426 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1005 20:04:51.256969  204426 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1005 20:04:51.257191  204426 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1005 20:04:51.257286  204426 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005903 seconds
	I1005 20:04:51.257388  204426 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1005 20:04:51.257564  204426 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1005 20:04:51.257649  204426 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1005 20:04:51.257805  204426 kubeadm.go:322] [mark-control-plane] Marking the node addons-127532 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1005 20:04:51.257854  204426 kubeadm.go:322] [bootstrap-token] Using token: 7sgj79.lmdpy2k1zy5yycjc
	I1005 20:04:51.260326  204426 out.go:204]   - Configuring RBAC rules ...
	I1005 20:04:51.260498  204426 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1005 20:04:51.260571  204426 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1005 20:04:51.260719  204426 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1005 20:04:51.260859  204426 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1005 20:04:51.261010  204426 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1005 20:04:51.261129  204426 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1005 20:04:51.261272  204426 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1005 20:04:51.261339  204426 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1005 20:04:51.261395  204426 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1005 20:04:51.261534  204426 kubeadm.go:322] 
	I1005 20:04:51.261633  204426 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1005 20:04:51.261711  204426 kubeadm.go:322] 
	I1005 20:04:51.261821  204426 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1005 20:04:51.261826  204426 kubeadm.go:322] 
	I1005 20:04:51.261847  204426 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1005 20:04:51.261901  204426 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1005 20:04:51.261963  204426 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1005 20:04:51.261976  204426 kubeadm.go:322] 
	I1005 20:04:51.262045  204426 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1005 20:04:51.262061  204426 kubeadm.go:322] 
	I1005 20:04:51.262187  204426 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1005 20:04:51.262222  204426 kubeadm.go:322] 
	I1005 20:04:51.262290  204426 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1005 20:04:51.262400  204426 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1005 20:04:51.262497  204426 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1005 20:04:51.262508  204426 kubeadm.go:322] 
	I1005 20:04:51.262613  204426 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1005 20:04:51.262710  204426 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1005 20:04:51.262727  204426 kubeadm.go:322] 
	I1005 20:04:51.262833  204426 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7sgj79.lmdpy2k1zy5yycjc \
	I1005 20:04:51.262960  204426 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6a89ee9372875bc7d66e64902bb71a3ff5903295d5c971e8f04d250bdf1f3a67 \
	I1005 20:04:51.262996  204426 kubeadm.go:322] 	--control-plane 
	I1005 20:04:51.263006  204426 kubeadm.go:322] 
	I1005 20:04:51.263107  204426 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1005 20:04:51.263121  204426 kubeadm.go:322] 
	I1005 20:04:51.263212  204426 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7sgj79.lmdpy2k1zy5yycjc \
	I1005 20:04:51.263360  204426 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6a89ee9372875bc7d66e64902bb71a3ff5903295d5c971e8f04d250bdf1f3a67 
	I1005 20:04:51.263395  204426 cni.go:84] Creating CNI manager for ""
	I1005 20:04:51.263407  204426 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1005 20:04:51.265737  204426 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1005 20:04:51.267503  204426 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1005 20:04:51.280775  204426 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1005 20:04:51.336332  204426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 20:04:51.336426  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:51.336509  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53 minikube.k8s.io/name=addons-127532 minikube.k8s.io/updated_at=2023_10_05T20_04_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:51.597468  204426 ops.go:34] apiserver oom_adj: -16
	I1005 20:04:51.597540  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:51.764564  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:52.362753  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:52.862913  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:53.362706  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:53.862909  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:54.363045  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:54.863153  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:55.362896  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:55.863004  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:56.362819  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:56.862499  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:57.362223  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:57.862831  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:58.362867  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:58.862101  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:59.362861  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:04:59.863063  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:05:00.362895  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:05:00.862397  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:05:01.362140  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:05:01.862392  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:05:02.362230  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:05:02.862776  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:05:03.362461  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:05:03.862586  204426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 20:05:03.965734  204426 kubeadm.go:1081] duration metric: took 12.629401365s to wait for elevateKubeSystemPrivileges.
	I1005 20:05:03.965782  204426 kubeadm.go:406] StartCluster complete in 40.54586587s
	I1005 20:05:03.965915  204426 settings.go:142] acquiring lock: {Name:mkfed9f409387a04b7721cc92d2a97be346ee0bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:05:03.966112  204426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-196818/kubeconfig
	I1005 20:05:03.966619  204426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-196818/kubeconfig: {Name:mkf3fdf56c0c99e0324159bdaf803d7c0d073271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 20:05:03.966858  204426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 20:05:03.967037  204426 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1005 20:05:03.967148  204426 addons.go:69] Setting volumesnapshots=true in profile "addons-127532"
	I1005 20:05:03.967159  204426 addons.go:69] Setting ingress=true in profile "addons-127532"
	I1005 20:05:03.967171  204426 addons.go:69] Setting default-storageclass=true in profile "addons-127532"
	I1005 20:05:03.967182  204426 config.go:182] Loaded profile config "addons-127532": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 20:05:03.967197  204426 addons.go:69] Setting metrics-server=true in profile "addons-127532"
	I1005 20:05:03.967198  204426 addons.go:69] Setting gcp-auth=true in profile "addons-127532"
	I1005 20:05:03.967207  204426 addons.go:231] Setting addon metrics-server=true in "addons-127532"
	I1005 20:05:03.967178  204426 addons.go:69] Setting ingress-dns=true in profile "addons-127532"
	I1005 20:05:03.967220  204426 mustload.go:65] Loading cluster: addons-127532
	I1005 20:05:03.967230  204426 addons.go:231] Setting addon ingress-dns=true in "addons-127532"
	I1005 20:05:03.967240  204426 addons.go:69] Setting cloud-spanner=true in profile "addons-127532"
	I1005 20:05:03.967258  204426 addons.go:231] Setting addon cloud-spanner=true in "addons-127532"
	I1005 20:05:03.967210  204426 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-127532"
	I1005 20:05:03.967304  204426 host.go:66] Checking if "addons-127532" exists ...
	I1005 20:05:03.967313  204426 host.go:66] Checking if "addons-127532" exists ...
	I1005 20:05:03.967388  204426 config.go:182] Loaded profile config "addons-127532": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 20:05:03.967620  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:03.967643  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:03.967645  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:03.967671  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:03.967703  204426 addons.go:69] Setting registry=true in profile "addons-127532"
	I1005 20:05:03.967727  204426 addons.go:231] Setting addon registry=true in "addons-127532"
	I1005 20:05:03.967729  204426 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-127532"
	I1005 20:05:03.967732  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:03.967737  204426 addons.go:69] Setting storage-provisioner=true in profile "addons-127532"
	I1005 20:05:03.967748  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:03.967190  204426 addons.go:231] Setting addon ingress=true in "addons-127532"
	I1005 20:05:03.967760  204426 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-127532"
	I1005 20:05:03.967262  204426 host.go:66] Checking if "addons-127532" exists ...
	I1005 20:05:03.967187  204426 addons.go:69] Setting inspektor-gadget=true in profile "addons-127532"
	I1005 20:05:03.967771  204426 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-127532"
	I1005 20:05:03.967773  204426 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-127532"
	I1005 20:05:03.967750  204426 addons.go:69] Setting helm-tiller=true in profile "addons-127532"
	I1005 20:05:03.967802  204426 addons.go:231] Setting addon helm-tiller=true in "addons-127532"
	I1005 20:05:03.967841  204426 host.go:66] Checking if "addons-127532" exists ...
	I1005 20:05:03.967894  204426 host.go:66] Checking if "addons-127532" exists ...
	I1005 20:05:03.968018  204426 host.go:66] Checking if "addons-127532" exists ...
	I1005 20:05:03.967167  204426 addons.go:231] Setting addon volumesnapshots=true in "addons-127532"
	I1005 20:05:03.968177  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:03.968185  204426 host.go:66] Checking if "addons-127532" exists ...
	I1005 20:05:03.968203  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:03.968243  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:03.968255  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:03.968281  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:03.968337  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:03.967782  204426 addons.go:231] Setting addon inspektor-gadget=true in "addons-127532"
	I1005 20:05:03.967750  204426 addons.go:231] Setting addon storage-provisioner=true in "addons-127532"
	I1005 20:05:03.968565  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:03.968582  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:03.968616  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:03.968615  204426 host.go:66] Checking if "addons-127532" exists ...
	I1005 20:05:03.968667  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:03.968681  204426 host.go:66] Checking if "addons-127532" exists ...
	I1005 20:05:03.968737  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:03.968782  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:03.968619  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:03.968999  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:03.969246  204426 host.go:66] Checking if "addons-127532" exists ...
	I1005 20:05:03.992383  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:03.992461  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:03.992950  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:03.992993  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:03.992995  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:03.993033  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:03.993428  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40053
	I1005 20:05:03.993730  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I1005 20:05:03.994374  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:03.995266  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:03.995428  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:03.995444  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:03.995867  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:03.995896  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:03.995968  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:03.996590  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:03.997591  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:03.997641  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:04.018633  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:04.018711  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:04.027231  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I1005 20:05:04.027460  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33075
	I1005 20:05:04.027614  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40805
	I1005 20:05:04.027882  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37247
	I1005 20:05:04.028518  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.028623  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.029043  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38713
	I1005 20:05:04.029169  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.029304  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.029323  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.029818  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.029838  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.029921  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.029993  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.030020  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.030040  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.030525  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.030586  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.030943  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:04.031292  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:04.031342  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:04.031647  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.031670  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.032092  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.032661  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:04.032692  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:04.033535  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:04.033570  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:04.034056  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.035429  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43703
	I1005 20:05:04.035576  204426 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-127532"
	I1005 20:05:04.035861  204426 host.go:66] Checking if "addons-127532" exists ...
	I1005 20:05:04.036300  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:04.036355  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:04.036760  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44571
	I1005 20:05:04.037472  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.038279  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.038301  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.038842  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.039649  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:04.039722  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:04.040184  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.040209  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.041930  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.043207  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36339
	I1005 20:05:04.043895  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.045038  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.045924  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:04.045971  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:04.046257  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I1005 20:05:04.046882  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.047642  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.047664  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.048167  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.048240  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35469
	I1005 20:05:04.048992  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:04.049045  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:04.049565  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.049595  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.050065  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.050169  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.050980  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:04.051030  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:04.051444  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.051467  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.051561  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39349
	I1005 20:05:04.052068  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.052276  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.052291  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.052364  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.052964  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:04.053011  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:04.053291  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.053315  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.053328  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.053548  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:04.053724  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.053930  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:04.056907  204426 addons.go:231] Setting addon default-storageclass=true in "addons-127532"
	I1005 20:05:04.056959  204426 host.go:66] Checking if "addons-127532" exists ...
	I1005 20:05:04.057391  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:04.057421  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:04.057663  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37385
	I1005 20:05:04.058474  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.059360  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.059390  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.059892  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.060160  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:04.062246  204426 host.go:66] Checking if "addons-127532" exists ...
	I1005 20:05:04.062694  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:04.062723  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:04.063292  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:05:04.070518  204426 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1005 20:05:04.071854  204426 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1005 20:05:04.071895  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1005 20:05:04.071932  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:05:04.073124  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38269
	I1005 20:05:04.074036  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.075166  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.075202  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.075786  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.076068  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:04.077972  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:05:04.082259  204426 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1005 20:05:04.078638  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.079288  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:05:04.083880  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45529
	I1005 20:05:04.084180  204426 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1005 20:05:04.084197  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1005 20:05:04.084203  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:05:04.084209  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I1005 20:05:04.084219  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:05:04.084232  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.084418  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39755
	I1005 20:05:04.084604  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:05:04.085022  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.085145  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:05:04.085215  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I1005 20:05:04.085483  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:05:04.085716  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.085731  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.085795  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.086049  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.086271  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.086294  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.086312  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.086596  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.086617  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.086691  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:04.086852  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.087003  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:04.087586  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41473
	I1005 20:05:04.087678  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.087755  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.087962  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:04.088066  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.088738  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.088763  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.089326  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.089638  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:04.089863  204426 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-127532" context rescaled to 1 replicas
	I1005 20:05:04.089897  204426 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.191 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1005 20:05:04.091741  204426 out.go:177] * Verifying Kubernetes components...
	I1005 20:05:04.090825  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:05:04.090857  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.090906  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I1005 20:05:04.091127  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36645
	I1005 20:05:04.091668  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:05:04.092659  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:05:04.093041  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:05:04.093388  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.093409  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.093493  204426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:05:04.093654  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:05:04.093689  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:05:04.095201  204426 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I1005 20:05:04.093750  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.093930  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41667
	I1005 20:05:04.094397  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:05:04.094428  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.094449  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.094462  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.095129  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44889
	I1005 20:05:04.097897  204426 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1005 20:05:04.097989  204426 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1005 20:05:04.098482  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.099325  204426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1005 20:05:04.099632  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:05:04.100002  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.100919  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1005 20:05:04.100895  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.102385  204426 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1005 20:05:04.102430  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:05:04.101538  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:04.101626  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.101643  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.101868  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:05:04.102043  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.102085  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I1005 20:05:04.100993  204426 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1005 20:05:04.103060  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.104028  204426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1005 20:05:04.104056  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.104220  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.104836  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.104848  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44165
	I1005 20:05:04.105527  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.106149  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.106736  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.106740  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.106782  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:05:04.107240  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.107354  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:04.107887  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.107921  204426 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1005 20:05:04.108338  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.109183  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.110619  204426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1005 20:05:04.109307  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1005 20:05:04.108731  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:05:04.108352  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.109313  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:05:04.109318  204426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.1
	I1005 20:05:04.109325  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:04.109515  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:04.109710  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:04.109995  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.110044  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.112136  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:05:04.112173  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.112421  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:05:04.113476  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:04.113605  204426 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1005 20:05:04.113640  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.114346  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:05:04.114435  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:04.118354  204426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1005 20:05:04.116626  204426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1005 20:05:04.116771  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:04.117039  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:05:04.117691  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.117751  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:05:04.119114  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:05:04.119942  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.120255  204426 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1005 20:05:04.120680  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:05:04.122080  204426 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1005 20:05:04.122519  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:05:04.123688  204426 out.go:177]   - Using image docker.io/registry:2.8.1
	I1005 20:05:04.123844  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:05:04.123870  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I1005 20:05:04.124117  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:05:04.126685  204426 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 20:05:04.125209  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:05:04.125232  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.125264  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1005 20:05:04.125510  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:05:04.128586  204426 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1005 20:05:04.128666  204426 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:05:04.128689  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1005 20:05:04.130416  204426 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1005 20:05:04.128728  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:05:04.128608  204426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1005 20:05:04.128744  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:05:04.129083  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:05:04.132759  204426 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1005 20:05:04.132812  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1005 20:05:04.132842  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:05:04.132887  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.134822  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32783
	I1005 20:05:04.133030  204426 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1005 20:05:04.133694  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:05:04.133752  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:05:04.133792  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:05:04.135892  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.136937  204426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1005 20:05:04.137020  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1005 20:05:04.137045  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.137371  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:05:04.137550  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.138015  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.138589  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:05:04.138593  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.138684  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:05:04.138186  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.138553  204426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1005 20:05:04.140639  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.140688  204426 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1005 20:05:04.138715  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.140708  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1005 20:05:04.138162  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:05:04.140733  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:05:04.138882  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:05:04.138917  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:05:04.140863  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.139161  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:05:04.139566  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.140951  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:05:04.140981  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.140326  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:05:04.141199  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:05:04.141334  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:05:04.141426  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:05:04.141483  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:05:04.141501  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:05:04.141572  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:05:04.141645  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:05:04.141871  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:05:04.142190  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:05:04.142215  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:05:04.142644  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.142917  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:04.143135  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:05:04.143166  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.143443  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:05:04.143623  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:05:04.143766  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:05:04.143910  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:05:04.145785  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:05:04.148539  204426 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1005 20:05:04.147014  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.147841  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:05:04.149587  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45781
	I1005 20:05:04.151514  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:05:04.151556  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.153834  204426 out.go:177]   - Using image docker.io/busybox:stable
	I1005 20:05:04.151867  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:05:04.152116  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:04.155832  204426 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1005 20:05:04.155861  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1005 20:05:04.155896  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:05:04.156039  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:05:04.156269  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:05:04.156384  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:04.156402  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:04.157019  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:04.157550  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:04.159928  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:05:04.159982  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.160307  204426 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1005 20:05:04.160325  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1005 20:05:04.160343  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:05:04.160416  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:05:04.160452  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.160645  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:05:04.160826  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:05:04.161072  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:05:04.161365  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:05:04.163748  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.164144  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:05:04.164175  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:04.164368  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:05:04.164585  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:05:04.164758  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:05:04.164851  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:05:04.429324  204426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1005 20:05:04.559021  204426 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1005 20:05:04.559052  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1005 20:05:04.577425  204426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1005 20:05:04.577454  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1005 20:05:04.608355  204426 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1005 20:05:04.608386  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1005 20:05:04.670407  204426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1005 20:05:04.671084  204426 node_ready.go:35] waiting up to 6m0s for node "addons-127532" to be "Ready" ...
	I1005 20:05:04.674920  204426 node_ready.go:49] node "addons-127532" has status "Ready":"True"
	I1005 20:05:04.674945  204426 node_ready.go:38] duration metric: took 3.835695ms waiting for node "addons-127532" to be "Ready" ...
	I1005 20:05:04.674956  204426 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 20:05:04.683248  204426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ht467" in "kube-system" namespace to be "Ready" ...
	I1005 20:05:04.802674  204426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1005 20:05:04.882591  204426 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1005 20:05:04.882626  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1005 20:05:04.883597  204426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1005 20:05:04.891445  204426 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1005 20:05:04.891474  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1005 20:05:04.893718  204426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 20:05:04.903454  204426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1005 20:05:04.903474  204426 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1005 20:05:04.903495  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1005 20:05:04.908734  204426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1005 20:05:05.011605  204426 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1005 20:05:05.011637  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1005 20:05:05.138930  204426 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1005 20:05:05.138958  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1005 20:05:05.178217  204426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1005 20:05:05.178254  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1005 20:05:05.345535  204426 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1005 20:05:05.345570  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1005 20:05:05.389348  204426 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1005 20:05:05.389385  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1005 20:05:05.612908  204426 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1005 20:05:05.612953  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1005 20:05:05.664367  204426 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1005 20:05:05.664402  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1005 20:05:05.716981  204426 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1005 20:05:05.717011  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1005 20:05:05.791695  204426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1005 20:05:05.795401  204426 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1005 20:05:05.795434  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1005 20:05:05.814072  204426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 20:05:05.814113  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1005 20:05:05.967804  204426 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1005 20:05:05.967844  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1005 20:05:05.989739  204426 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1005 20:05:05.989772  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1005 20:05:06.013041  204426 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1005 20:05:06.013071  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1005 20:05:06.066616  204426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 20:05:06.078836  204426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1005 20:05:06.274010  204426 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1005 20:05:06.274047  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1005 20:05:06.329658  204426 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1005 20:05:06.329684  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1005 20:05:06.396933  204426 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1005 20:05:06.396966  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1005 20:05:06.608779  204426 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1005 20:05:06.608816  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1005 20:05:06.708045  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:06.708480  204426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1005 20:05:06.978510  204426 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1005 20:05:06.978548  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1005 20:05:07.113722  204426 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1005 20:05:07.113749  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1005 20:05:07.300936  204426 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1005 20:05:07.300970  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1005 20:05:07.330440  204426 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1005 20:05:07.330483  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1005 20:05:07.586576  204426 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1005 20:05:07.586612  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1005 20:05:07.657594  204426 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1005 20:05:07.657637  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1005 20:05:07.813067  204426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1005 20:05:07.827233  204426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1005 20:05:09.207298  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:09.694059  204426 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.023599918s)
	I1005 20:05:09.694104  204426 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1005 20:05:09.696834  204426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.267445165s)
	I1005 20:05:09.696944  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:09.696974  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:09.697467  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:09.697493  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:09.697509  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:09.697521  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:09.697891  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:09.697913  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:10.737148  204426 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1005 20:05:10.737221  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:05:10.741867  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:10.742984  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:05:10.743048  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:10.743439  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:05:10.743837  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:05:10.744069  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:05:10.744562  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:05:11.212199  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:11.834759  204426 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1005 20:05:12.256922  204426 addons.go:231] Setting addon gcp-auth=true in "addons-127532"
	I1005 20:05:12.256998  204426 host.go:66] Checking if "addons-127532" exists ...
	I1005 20:05:12.257435  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:12.257500  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:12.297838  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34397
	I1005 20:05:12.298484  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:12.299249  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:12.299293  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:12.299816  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:12.300680  204426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:05:12.300742  204426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:05:12.320896  204426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I1005 20:05:12.321441  204426 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:05:12.321977  204426 main.go:141] libmachine: Using API Version  1
	I1005 20:05:12.322000  204426 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:05:12.322415  204426 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:05:12.322736  204426 main.go:141] libmachine: (addons-127532) Calling .GetState
	I1005 20:05:12.325503  204426 main.go:141] libmachine: (addons-127532) Calling .DriverName
	I1005 20:05:12.326096  204426 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1005 20:05:12.326260  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHHostname
	I1005 20:05:12.332385  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:12.333155  204426 main.go:141] libmachine: (addons-127532) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:f8:fe", ip: ""} in network mk-addons-127532: {Iface:virbr1 ExpiryTime:2023-10-05 21:04:03 +0000 UTC Type:0 Mac:52:54:00:e0:f8:fe Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:addons-127532 Clientid:01:52:54:00:e0:f8:fe}
	I1005 20:05:12.333198  204426 main.go:141] libmachine: (addons-127532) DBG | domain addons-127532 has defined IP address 192.168.39.191 and MAC address 52:54:00:e0:f8:fe in network mk-addons-127532
	I1005 20:05:12.333516  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHPort
	I1005 20:05:12.333773  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHKeyPath
	I1005 20:05:12.334036  204426 main.go:141] libmachine: (addons-127532) Calling .GetSSHUsername
	I1005 20:05:12.334214  204426 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/addons-127532/id_rsa Username:docker}
	I1005 20:05:13.796669  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:14.318021  204426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.515292646s)
	I1005 20:05:14.318092  204426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.434460996s)
	I1005 20:05:14.318102  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:14.318120  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:14.318134  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:14.318154  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:14.318211  204426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.424452914s)
	I1005 20:05:14.318288  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:14.318304  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:14.318258  204426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.414768789s)
	I1005 20:05:14.318363  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:14.318383  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:14.318747  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:14.318763  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:14.318766  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:14.318777  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:14.318792  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:14.318792  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:14.318801  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:14.318805  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:14.318810  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:14.318816  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:14.318824  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:14.318845  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:14.318881  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:14.318916  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:14.318933  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:14.318941  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:14.319167  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:14.319189  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:14.319193  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:14.319204  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:14.319205  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:14.319206  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:14.320556  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:14.320614  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:14.320623  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:14.320875  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:14.320892  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:14.320899  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:14.320906  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:14.321194  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:14.321259  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:14.321284  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:14.344153  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:14.344184  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:14.344509  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:14.344554  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:14.344589  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	W1005 20:05:14.344699  204426 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1005 20:05:14.351721  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:14.351746  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:14.352149  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:14.352172  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:14.352172  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:16.142002  204426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.233212861s)
	I1005 20:05:16.142063  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:16.142079  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:16.142005  204426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.35026667s)
	I1005 20:05:16.142106  204426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.075446635s)
	I1005 20:05:16.142139  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:16.142141  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:16.142155  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:16.142162  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:16.142203  204426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.063315682s)
	I1005 20:05:16.142234  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:16.142244  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:16.142376  204426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.433861213s)
	W1005 20:05:16.142411  204426 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1005 20:05:16.142436  204426 retry.go:31] will retry after 285.388232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1005 20:05:16.142692  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:16.142708  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:16.142754  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:16.142773  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:16.142791  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:16.142803  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:16.142822  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:16.142836  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:16.142844  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:16.142777  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:16.142755  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:16.142903  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:16.142915  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:16.142928  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:16.142945  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:16.142983  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:16.143228  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:16.143240  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:16.143248  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:16.143551  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:16.142733  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:16.143652  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:16.143669  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:16.143680  204426 addons.go:467] Verifying addon ingress=true in "addons-127532"
	I1005 20:05:16.148071  204426 out.go:177] * Verifying ingress addon...
	I1005 20:05:16.144399  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:16.143158  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:16.144522  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:16.144547  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:16.144569  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:16.143134  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:16.149570  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:16.149619  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:16.149639  204426 addons.go:467] Verifying addon metrics-server=true in "addons-127532"
	I1005 20:05:16.149660  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:16.149684  204426 addons.go:467] Verifying addon registry=true in "addons-127532"
	I1005 20:05:16.151239  204426 out.go:177] * Verifying registry addon...
	I1005 20:05:16.150703  204426 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1005 20:05:16.153939  204426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1005 20:05:16.158111  204426 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1005 20:05:16.158146  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:16.162702  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:16.163749  204426 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1005 20:05:16.163775  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:16.168724  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:16.209705  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:16.428696  204426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1005 20:05:16.671038  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:16.676832  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:17.167818  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:17.178391  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:17.689673  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:17.690072  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:18.173038  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:18.192232  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:18.225057  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:18.681290  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:18.695223  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:19.153962  204426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (11.340824718s)
	I1005 20:05:19.154046  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:19.154064  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:19.154084  204426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (11.326796178s)
	I1005 20:05:19.154144  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:19.154164  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:19.154238  204426 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.828111171s)
	I1005 20:05:19.156807  204426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1005 20:05:19.154490  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:19.154490  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:19.154503  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:19.154518  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:19.158831  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:19.158849  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:19.160795  204426 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1005 20:05:19.158888  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:19.158899  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:19.162414  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:19.162521  204426 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1005 20:05:19.162538  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:19.162546  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1005 20:05:19.162802  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:19.162847  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:19.162876  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:19.162881  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:19.162893  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:19.162900  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:19.162915  204426 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-127532"
	I1005 20:05:19.165010  204426 out.go:177] * Verifying csi-hostpath-driver addon...
	I1005 20:05:19.167697  204426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1005 20:05:19.194366  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:19.234518  204426 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1005 20:05:19.234552  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:19.246392  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:19.321384  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:19.357059  204426 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1005 20:05:19.357092  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1005 20:05:19.488854  204426 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1005 20:05:19.488886  204426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1005 20:05:19.669015  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:19.677773  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:19.734285  204426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1005 20:05:19.845032  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:20.168783  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:20.173833  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:20.312793  204426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.884021041s)
	I1005 20:05:20.312915  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:20.312935  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:20.313292  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:20.313345  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:20.313356  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:20.313377  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:20.313388  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:20.313702  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:20.313727  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:20.313705  204426 main.go:141] libmachine: (addons-127532) DBG | Closing plugin on server side
	I1005 20:05:20.330444  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:20.673768  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:20.683712  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:20.712089  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:20.830125  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:21.168566  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:21.176917  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:21.327334  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:21.667738  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:21.675365  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:21.846940  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:22.153673  204426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.4193338s)
	I1005 20:05:22.153754  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:22.153775  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:22.154157  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:22.154184  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:22.154213  204426 main.go:141] libmachine: Making call to close driver server
	I1005 20:05:22.154224  204426 main.go:141] libmachine: (addons-127532) Calling .Close
	I1005 20:05:22.154534  204426 main.go:141] libmachine: Successfully made call to close driver server
	I1005 20:05:22.154550  204426 main.go:141] libmachine: Making call to close connection to plugin binary
	I1005 20:05:22.156535  204426 addons.go:467] Verifying addon gcp-auth=true in "addons-127532"
	I1005 20:05:22.158968  204426 out.go:177] * Verifying gcp-auth addon...
	I1005 20:05:22.162126  204426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1005 20:05:22.176477  204426 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1005 20:05:22.176509  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:22.189150  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:22.189319  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:22.199447  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:22.332430  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:22.670180  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:22.676110  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:22.705687  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:22.830688  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:23.167932  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:23.177458  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:23.207682  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:23.212706  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:23.327470  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:23.677072  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:23.686635  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:23.705578  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:23.831826  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:24.167974  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:24.175484  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:24.205821  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:24.329930  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:24.670368  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:24.678232  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:24.707573  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:24.829354  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:25.168396  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:25.175162  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:25.205080  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:25.329622  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:25.668264  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:25.675531  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:25.704333  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:25.707605  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:25.828282  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:26.169074  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:26.177981  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:26.203854  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:26.328638  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:26.667785  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:26.674626  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:26.712854  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:26.827405  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:27.167825  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:27.179427  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:27.204175  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:27.328340  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:27.668820  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:27.674378  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:27.704318  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:27.832102  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:28.169766  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:28.176305  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:28.209129  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:28.213354  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:28.328538  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:28.668332  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:28.674653  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:28.708237  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:28.828489  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:29.168791  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:29.174122  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:29.205210  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:29.328333  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:29.669491  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:29.674953  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:29.705982  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:29.828865  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:30.169616  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:30.173649  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:30.205393  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:30.328431  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:30.668508  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:30.679462  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:30.704647  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:30.715306  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:30.833308  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:31.169067  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:31.174763  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:31.206611  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:31.330935  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:31.668674  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:31.674623  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:31.704094  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:31.828478  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:32.168816  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:32.176528  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:32.204408  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:32.334123  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:32.678891  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:32.679186  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:32.704435  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:32.830284  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:33.168293  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:33.173935  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:33.205016  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:33.208161  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:33.327673  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:33.682499  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:33.686167  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:33.708491  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:33.828189  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:34.169498  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:34.175229  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:34.205061  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:34.328407  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:34.668134  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:34.675018  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:34.705414  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:34.829085  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:35.169426  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:35.174066  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:35.205241  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:35.209157  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:35.329426  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:35.681266  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:35.681775  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:35.710045  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:35.828472  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:36.169335  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:36.174481  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:36.204883  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:36.328388  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:36.669598  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:36.677719  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:36.706795  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:36.828092  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:37.168153  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:37.174669  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:37.204835  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:37.330281  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:37.678419  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:37.678596  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:37.704041  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:37.707418  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:37.828368  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:38.168887  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:38.176000  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:38.204558  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:38.327785  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:38.668606  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:38.675221  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:38.706976  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:38.828209  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:39.169488  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:39.174581  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:39.203765  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:39.330322  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:39.680793  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:39.700341  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:39.705717  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:39.709421  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:39.830899  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:40.172406  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:40.177845  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:40.207830  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:40.329874  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:40.669485  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:40.675218  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:40.715633  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:40.828820  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:41.169372  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:41.178335  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:41.203867  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:41.328263  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:41.668695  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:41.674054  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:41.705076  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:41.827665  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:42.167944  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:42.177403  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:42.205734  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:42.208717  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:42.329596  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:42.670189  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:42.673630  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:42.711308  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:42.830351  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:43.169377  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:43.174911  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:43.208404  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:43.330360  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:43.669151  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:43.675207  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:43.704466  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:43.828351  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:44.168491  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:44.175932  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:44.204030  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:44.209845  204426 pod_ready.go:102] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"False"
	I1005 20:05:44.330374  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:44.686424  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:44.691747  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:44.716471  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:44.721609  204426 pod_ready.go:92] pod "coredns-5dd5756b68-ht467" in "kube-system" namespace has status "Ready":"True"
	I1005 20:05:44.721637  204426 pod_ready.go:81] duration metric: took 40.038354827s waiting for pod "coredns-5dd5756b68-ht467" in "kube-system" namespace to be "Ready" ...
	I1005 20:05:44.721648  204426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r2hj8" in "kube-system" namespace to be "Ready" ...
	I1005 20:05:44.726637  204426 pod_ready.go:97] error getting pod "coredns-5dd5756b68-r2hj8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-r2hj8" not found
	I1005 20:05:44.726665  204426 pod_ready.go:81] duration metric: took 5.010377ms waiting for pod "coredns-5dd5756b68-r2hj8" in "kube-system" namespace to be "Ready" ...
	E1005 20:05:44.726676  204426 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-r2hj8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-r2hj8" not found
	I1005 20:05:44.726682  204426 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-127532" in "kube-system" namespace to be "Ready" ...
	I1005 20:05:44.733287  204426 pod_ready.go:92] pod "etcd-addons-127532" in "kube-system" namespace has status "Ready":"True"
	I1005 20:05:44.733314  204426 pod_ready.go:81] duration metric: took 6.626242ms waiting for pod "etcd-addons-127532" in "kube-system" namespace to be "Ready" ...
	I1005 20:05:44.733325  204426 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-127532" in "kube-system" namespace to be "Ready" ...
	I1005 20:05:44.740406  204426 pod_ready.go:92] pod "kube-apiserver-addons-127532" in "kube-system" namespace has status "Ready":"True"
	I1005 20:05:44.740433  204426 pod_ready.go:81] duration metric: took 7.101003ms waiting for pod "kube-apiserver-addons-127532" in "kube-system" namespace to be "Ready" ...
	I1005 20:05:44.740444  204426 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-127532" in "kube-system" namespace to be "Ready" ...
	I1005 20:05:44.750057  204426 pod_ready.go:92] pod "kube-controller-manager-addons-127532" in "kube-system" namespace has status "Ready":"True"
	I1005 20:05:44.750083  204426 pod_ready.go:81] duration metric: took 9.631876ms waiting for pod "kube-controller-manager-addons-127532" in "kube-system" namespace to be "Ready" ...
	I1005 20:05:44.750095  204426 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8zmq5" in "kube-system" namespace to be "Ready" ...
	I1005 20:05:44.827717  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:44.905283  204426 pod_ready.go:92] pod "kube-proxy-8zmq5" in "kube-system" namespace has status "Ready":"True"
	I1005 20:05:44.905318  204426 pod_ready.go:81] duration metric: took 155.213582ms waiting for pod "kube-proxy-8zmq5" in "kube-system" namespace to be "Ready" ...
	I1005 20:05:44.905333  204426 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-127532" in "kube-system" namespace to be "Ready" ...
	I1005 20:05:45.169209  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:45.176517  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:45.203914  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:45.303219  204426 pod_ready.go:92] pod "kube-scheduler-addons-127532" in "kube-system" namespace has status "Ready":"True"
	I1005 20:05:45.303248  204426 pod_ready.go:81] duration metric: took 397.906094ms waiting for pod "kube-scheduler-addons-127532" in "kube-system" namespace to be "Ready" ...
	I1005 20:05:45.303258  204426 pod_ready.go:38] duration metric: took 40.628293259s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 20:05:45.303279  204426 api_server.go:52] waiting for apiserver process to appear ...
	I1005 20:05:45.303331  204426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:05:45.320653  204426 api_server.go:72] duration metric: took 41.230713749s to wait for apiserver process to appear ...
	I1005 20:05:45.320689  204426 api_server.go:88] waiting for apiserver healthz status ...
	I1005 20:05:45.320740  204426 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I1005 20:05:45.325839  204426 api_server.go:279] https://192.168.39.191:8443/healthz returned 200:
	ok
	I1005 20:05:45.327550  204426 api_server.go:141] control plane version: v1.28.2
	I1005 20:05:45.327582  204426 api_server.go:131] duration metric: took 6.884314ms to wait for apiserver health ...
	I1005 20:05:45.327594  204426 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 20:05:45.334269  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:45.510161  204426 system_pods.go:59] 17 kube-system pods found
	I1005 20:05:45.510205  204426 system_pods.go:61] "coredns-5dd5756b68-ht467" [c9498ecd-0800-404c-8b78-62fe1f82d706] Running
	I1005 20:05:45.510212  204426 system_pods.go:61] "csi-hostpath-attacher-0" [21033633-7360-476a-bc84-7ba12dbc0c91] Running
	I1005 20:05:45.510216  204426 system_pods.go:61] "csi-hostpath-resizer-0" [081b6ec6-9a37-4fb4-aa3b-61bceebf5079] Running
	I1005 20:05:45.510233  204426 system_pods.go:61] "csi-hostpathplugin-xbqj8" [e0a331a6-3039-49e4-bc46-7f0bbfa14605] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1005 20:05:45.510242  204426 system_pods.go:61] "etcd-addons-127532" [cb00c17f-4be4-4fe7-9dea-328dff5d3fee] Running
	I1005 20:05:45.510249  204426 system_pods.go:61] "kube-apiserver-addons-127532" [9fa739ec-3939-4add-a824-55a5f9d8555a] Running
	I1005 20:05:45.510257  204426 system_pods.go:61] "kube-controller-manager-addons-127532" [c24ecfc0-1fe1-4843-9111-f24dc1b02a70] Running
	I1005 20:05:45.510268  204426 system_pods.go:61] "kube-ingress-dns-minikube" [410f45e6-cc5d-40f7-89f7-dc0d86c2f5c5] Running
	I1005 20:05:45.510278  204426 system_pods.go:61] "kube-proxy-8zmq5" [9c985012-8981-48a5-9788-0776c85ddc6c] Running
	I1005 20:05:45.510284  204426 system_pods.go:61] "kube-scheduler-addons-127532" [d3d61c1f-3088-483a-b070-5295e4aaca35] Running
	I1005 20:05:45.510288  204426 system_pods.go:61] "metrics-server-7c66d45ddc-p846v" [87d58e97-7364-4431-9e84-80f4247aa856] Running
	I1005 20:05:45.510292  204426 system_pods.go:61] "registry-88fwl" [59c90c75-e8df-40f9-9c0f-ef6b6e5f7c48] Running
	I1005 20:05:45.510302  204426 system_pods.go:61] "registry-proxy-rv8kw" [0faef84e-a6c6-4c66-972e-567277a6613b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1005 20:05:45.510309  204426 system_pods.go:61] "snapshot-controller-58dbcc7b99-6zzrx" [016395dd-3886-447c-85fc-f80131a6bcdf] Running
	I1005 20:05:45.510320  204426 system_pods.go:61] "snapshot-controller-58dbcc7b99-nzpgl" [8b98b53e-8ae6-4b67-ba00-7c7538206beb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 20:05:45.510328  204426 system_pods.go:61] "storage-provisioner" [556b5271-cb63-4b66-ba25-e71fe7e72237] Running
	I1005 20:05:45.510332  204426 system_pods.go:61] "tiller-deploy-7b677967b9-wpjtp" [a40dafcd-da9a-46c6-931f-65e21917673c] Running
	I1005 20:05:45.510340  204426 system_pods.go:74] duration metric: took 182.739279ms to wait for pod list to return data ...
	I1005 20:05:45.510351  204426 default_sa.go:34] waiting for default service account to be created ...
	I1005 20:05:45.678213  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:45.679838  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:45.704910  204426 default_sa.go:45] found service account: "default"
	I1005 20:05:45.704958  204426 default_sa.go:55] duration metric: took 194.599189ms for default service account to be created ...
	I1005 20:05:45.704971  204426 system_pods.go:116] waiting for k8s-apps to be running ...
	I1005 20:05:45.706349  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:45.849647  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:45.920864  204426 system_pods.go:86] 17 kube-system pods found
	I1005 20:05:45.920897  204426 system_pods.go:89] "coredns-5dd5756b68-ht467" [c9498ecd-0800-404c-8b78-62fe1f82d706] Running
	I1005 20:05:45.920904  204426 system_pods.go:89] "csi-hostpath-attacher-0" [21033633-7360-476a-bc84-7ba12dbc0c91] Running
	I1005 20:05:45.920909  204426 system_pods.go:89] "csi-hostpath-resizer-0" [081b6ec6-9a37-4fb4-aa3b-61bceebf5079] Running
	I1005 20:05:45.920918  204426 system_pods.go:89] "csi-hostpathplugin-xbqj8" [e0a331a6-3039-49e4-bc46-7f0bbfa14605] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1005 20:05:45.920924  204426 system_pods.go:89] "etcd-addons-127532" [cb00c17f-4be4-4fe7-9dea-328dff5d3fee] Running
	I1005 20:05:45.920931  204426 system_pods.go:89] "kube-apiserver-addons-127532" [9fa739ec-3939-4add-a824-55a5f9d8555a] Running
	I1005 20:05:45.920936  204426 system_pods.go:89] "kube-controller-manager-addons-127532" [c24ecfc0-1fe1-4843-9111-f24dc1b02a70] Running
	I1005 20:05:45.920944  204426 system_pods.go:89] "kube-ingress-dns-minikube" [410f45e6-cc5d-40f7-89f7-dc0d86c2f5c5] Running
	I1005 20:05:45.920948  204426 system_pods.go:89] "kube-proxy-8zmq5" [9c985012-8981-48a5-9788-0776c85ddc6c] Running
	I1005 20:05:45.920955  204426 system_pods.go:89] "kube-scheduler-addons-127532" [d3d61c1f-3088-483a-b070-5295e4aaca35] Running
	I1005 20:05:45.920960  204426 system_pods.go:89] "metrics-server-7c66d45ddc-p846v" [87d58e97-7364-4431-9e84-80f4247aa856] Running
	I1005 20:05:45.920967  204426 system_pods.go:89] "registry-88fwl" [59c90c75-e8df-40f9-9c0f-ef6b6e5f7c48] Running
	I1005 20:05:45.920972  204426 system_pods.go:89] "registry-proxy-rv8kw" [0faef84e-a6c6-4c66-972e-567277a6613b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1005 20:05:45.920977  204426 system_pods.go:89] "snapshot-controller-58dbcc7b99-6zzrx" [016395dd-3886-447c-85fc-f80131a6bcdf] Running
	I1005 20:05:45.920988  204426 system_pods.go:89] "snapshot-controller-58dbcc7b99-nzpgl" [8b98b53e-8ae6-4b67-ba00-7c7538206beb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 20:05:45.920996  204426 system_pods.go:89] "storage-provisioner" [556b5271-cb63-4b66-ba25-e71fe7e72237] Running
	I1005 20:05:45.921002  204426 system_pods.go:89] "tiller-deploy-7b677967b9-wpjtp" [a40dafcd-da9a-46c6-931f-65e21917673c] Running
	I1005 20:05:45.921014  204426 system_pods.go:126] duration metric: took 216.034724ms to wait for k8s-apps to be running ...
	I1005 20:05:45.921028  204426 system_svc.go:44] waiting for kubelet service to be running ....
	I1005 20:05:45.921084  204426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:05:45.950875  204426 system_svc.go:56] duration metric: took 29.829035ms WaitForService to wait for kubelet.
	I1005 20:05:45.950912  204426 kubeadm.go:581] duration metric: took 41.860984621s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1005 20:05:45.950941  204426 node_conditions.go:102] verifying NodePressure condition ...
	I1005 20:05:46.104769  204426 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1005 20:05:46.104874  204426 node_conditions.go:123] node cpu capacity is 2
	I1005 20:05:46.104893  204426 node_conditions.go:105] duration metric: took 153.944953ms to run NodePressure ...
	I1005 20:05:46.104910  204426 start.go:228] waiting for startup goroutines ...
	I1005 20:05:46.169235  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:46.175833  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:46.243815  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:46.328341  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:46.671662  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:46.677936  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:46.704736  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:46.828473  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:47.168655  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:47.173888  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:47.204429  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:47.328861  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:47.668738  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:47.673118  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:47.703381  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:47.829052  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:48.168736  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:48.174133  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:48.203617  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:48.329796  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:48.669489  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:48.676077  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:48.707254  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:48.827872  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:49.170015  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:49.183039  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:49.204925  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:49.328204  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:49.671227  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:49.674594  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:49.704967  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:49.829112  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:50.168339  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:50.176754  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:50.205048  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:50.328703  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:50.676093  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:50.679635  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:50.709239  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:50.828748  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:51.168804  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:51.174298  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:51.205289  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:51.330074  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:51.668978  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:51.674964  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:51.717766  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:51.833124  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:52.168089  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:52.174972  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:52.205789  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:52.335175  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:52.681526  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:52.683827  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:52.706004  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:52.838423  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:53.170056  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:53.174742  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:53.210150  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:53.329807  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:53.669172  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:53.674721  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:53.706831  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:53.828151  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:54.173073  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:54.181552  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:54.205328  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:54.329287  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:54.668758  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:54.679601  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:54.705279  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:54.829957  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:55.178957  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:55.181247  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:55.203743  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:55.332031  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:55.669224  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:55.674375  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:55.709740  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:55.830932  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:56.171323  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:56.173784  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:56.204938  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:56.327646  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:56.670017  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:56.675567  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:56.706992  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:56.828164  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:57.169697  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:57.180623  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:57.205535  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:57.329363  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:57.669547  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:57.679857  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:57.703598  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:57.829683  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:58.169395  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:58.174450  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:58.203979  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:58.328127  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:58.668942  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:58.673974  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:58.703468  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:58.829606  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:59.168041  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:59.174586  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:59.204484  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:59.328925  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:05:59.668509  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:05:59.678838  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:05:59.707138  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:05:59.828036  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:00.168371  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:00.175969  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:06:00.205074  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:00.327970  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:00.667885  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:00.677768  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 20:06:00.704581  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:00.843197  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:01.168819  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:01.180436  204426 kapi.go:107] duration metric: took 45.026493925s to wait for kubernetes.io/minikube-addons=registry ...
	I1005 20:06:01.206744  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:01.329351  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:01.671205  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:01.705155  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:01.828450  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:02.168462  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:02.204042  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:02.330158  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:02.668444  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:02.705570  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:02.832954  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:03.172259  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:03.206369  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:03.330550  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:03.675887  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:03.705640  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:03.828990  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:04.168385  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:04.204538  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:04.340287  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:04.668965  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:04.705720  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:04.828200  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:05.191602  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:05.204023  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:05.333339  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:05.670391  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:05.720537  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:05.831450  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:06.169656  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:06.205511  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:06.331515  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:06.669456  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:06.706313  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:06.828677  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:07.169946  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:07.205083  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:07.525357  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:07.668963  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:07.704867  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:07.828189  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:08.168154  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:08.203666  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:08.333085  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:08.668263  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:08.706642  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:08.828513  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 20:06:09.168199  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:09.203673  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:09.328100  204426 kapi.go:107] duration metric: took 50.160405763s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1005 20:06:09.670808  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:09.704545  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:10.172584  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:10.203970  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:10.671740  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:10.704854  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:11.168961  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:11.205640  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:11.670253  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:11.708758  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:12.168218  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:12.204033  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:12.667767  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:12.704598  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:13.169431  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:13.204583  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:13.669574  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:13.704976  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:14.168727  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:14.204099  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:14.668365  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:14.704045  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:15.168645  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:15.203711  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:15.667875  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:15.704383  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:16.169720  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:16.204814  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:16.668107  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:16.705634  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:17.169979  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:17.205889  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:17.668574  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:17.703973  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:18.168715  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:18.204104  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:18.668110  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:18.704495  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:19.168612  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:19.204134  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:19.671146  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:19.704230  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:20.168612  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:20.203892  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:20.671161  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:20.704168  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:21.169054  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:21.204484  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:21.668530  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:21.704587  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:22.172626  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:22.205581  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:22.668148  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:22.703564  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:23.167275  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:23.203700  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:23.668242  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:23.703066  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:24.654237  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:24.661635  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:24.673165  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:24.708546  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:25.168750  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:25.203739  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:25.667822  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:25.704819  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:26.168174  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:26.204082  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:26.668110  204426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 20:06:26.704207  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:27.167588  204426 kapi.go:107] duration metric: took 1m11.016884369s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1005 20:06:27.215231  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:27.704940  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:28.204532  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:28.887557  204426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 20:06:29.203657  204426 kapi.go:107] duration metric: took 1m7.041533187s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1005 20:06:29.205486  204426 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-127532 cluster.
	I1005 20:06:29.207068  204426 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1005 20:06:29.208493  204426 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1005 20:06:29.209952  204426 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, storage-provisioner-rancher, helm-tiller, metrics-server, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1005 20:06:29.211325  204426 addons.go:502] enable addons completed in 1m25.24429013s: enabled=[cloud-spanner storage-provisioner ingress-dns storage-provisioner-rancher helm-tiller metrics-server inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1005 20:06:29.211363  204426 start.go:233] waiting for cluster config update ...
	I1005 20:06:29.211380  204426 start.go:242] writing updated cluster config ...
	I1005 20:06:29.211738  204426 ssh_runner.go:195] Run: rm -f paused
	I1005 20:06:29.263820  204426 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1005 20:06:29.265673  204426 out.go:177] * Done! kubectl is now configured to use "addons-127532" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD
	d90a01e3a89ed       a416a98b71e22       Less than a second ago   Created             helper-pod                               0                   8043f7f90282a       helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a
	7c5ddde55ceb1       beae173ccac6a       2 seconds ago            Exited              registry-test                            0                   a3d0fcd1c3ac0       registry-test
	da2bb3b65f5c3       a416a98b71e22       6 seconds ago            Exited              busybox                                  0                   f4292393b5edd       test-local-path
	b688bcf059754       98f6c3b32d565       6 seconds ago            Exited              helm-test                                0                   4a80410d81961       helm-test
	15a9b7d2e5ee1       a416a98b71e22       12 seconds ago           Exited              helper-pod                               0                   37632bd9e1472       helper-pod-create-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a
	b4cf3a8e005ba       6d2a98b274382       15 seconds ago           Running             gcp-auth                                 0                   3f9494cd70edf       gcp-auth-d4c87556c-jn7jv
	5e35b1d030f43       e5b2b456d9f6b       17 seconds ago           Running             controller                               0                   6d7ba03237cb7       ingress-nginx-controller-5c4c674fdc-gm89w
	890d646ff6802       738351fd438f0       35 seconds ago           Running             csi-snapshotter                          0                   b7fc6a2f3240a       csi-hostpathplugin-xbqj8
	aa84f02a7c5aa       931dbfd16f87c       37 seconds ago           Running             csi-provisioner                          0                   b7fc6a2f3240a       csi-hostpathplugin-xbqj8
	803ff5fa81d57       e899260153aed       39 seconds ago           Running             liveness-probe                           0                   b7fc6a2f3240a       csi-hostpathplugin-xbqj8
	f716a81d349c6       e255e073c508c       40 seconds ago           Running             hostpath                                 0                   b7fc6a2f3240a       csi-hostpathplugin-xbqj8
	4b8d89a0883e6       88ef14a257f42       42 seconds ago           Running             node-driver-registrar                    0                   b7fc6a2f3240a       csi-hostpathplugin-xbqj8
	b7db52dff9ae8       7e7451bb70423       43 seconds ago           Exited              patch                                    2                   4bf50ff4bb2d4       ingress-nginx-admission-patch-9dv5t
	0964d7520ed7d       7e7451bb70423       43 seconds ago           Exited              patch                                    0                   081374620a0a1       gcp-auth-certs-patch-vwjqh
	47fc63c05f0a6       7e7451bb70423       43 seconds ago           Exited              create                                   0                   0429745ceb65c       gcp-auth-certs-create-6mmsw
	f3c99d94dc5f7       d2fd211e7dcaa       44 seconds ago           Running             registry-proxy                           0                   c513cbddaeef6       registry-proxy-rv8kw
	947c84648c70d       7e7451bb70423       49 seconds ago           Exited              create                                   0                   c9f8cea3e1296       ingress-nginx-admission-create-qj7g6
	80436625227c3       e16d1e3a10667       49 seconds ago           Running             local-path-provisioner                   0                   22a1dddc8697b       local-path-provisioner-78b46b4d5c-7z5mg
	cea6df26726a4       aa61ee9c70bc4       51 seconds ago           Running             volume-snapshot-controller               0                   30931ae2df1b1       snapshot-controller-58dbcc7b99-nzpgl
	a2f2abd3f0b46       a1ed5895ba635       51 seconds ago           Running             csi-external-health-monitor-controller   0                   b7fc6a2f3240a       csi-hostpathplugin-xbqj8
	ab85df507664e       b8291d369c93d       53 seconds ago           Running             gadget                                   0                   9d2c58fd5adfe       gadget-hkzrd
	d782967574d73       19a639eda60f0       About a minute ago       Running             csi-resizer                              0                   aa0cd6997d21b       csi-hostpath-resizer-0
	d32fa6ab29aeb       aa61ee9c70bc4       About a minute ago       Running             volume-snapshot-controller               0                   73a6a064785ba       snapshot-controller-58dbcc7b99-6zzrx
	1c3f3f61bf07c       59cbb42146a37       About a minute ago       Running             csi-attacher                             0                   d6c74380a6dd3       csi-hostpath-attacher-0
	fe470a37a89c5       3a0f7b0a13ef6       About a minute ago       Running             registry                                 0                   c6173631ef466       registry-88fwl
	c77114a1dcc23       6e38f40d628db       About a minute ago       Running             storage-provisioner                      0                   f2f0774bc22f1       storage-provisioner
	c21d1c73991b4       1499ed4fbd0aa       About a minute ago       Running             minikube-ingress-dns                     0                   138c2506a5b4e       kube-ingress-dns-minikube
	e7196076ec242       ead0a4a53df89       About a minute ago       Running             coredns                                  0                   bc41f417a76af       coredns-5dd5756b68-ht467
	7acc5650cb59c       c120fed2beb84       About a minute ago       Running             kube-proxy                               0                   32611f290ac10       kube-proxy-8zmq5
	55902ce9be722       7a5d9d67a13f6       2 minutes ago            Running             kube-scheduler                           0                   d6f868ecfb56e       kube-scheduler-addons-127532
	0ce6cd93e10aa       73deb9a3f7025       2 minutes ago            Running             etcd                                     0                   ae9612e5adf99       etcd-addons-127532
	08cdeaff99532       55f13c92defb1       2 minutes ago            Running             kube-controller-manager                  0                   06673ec429886       kube-controller-manager-addons-127532
	187b61c6228f6       cdcab12b2dd16       2 minutes ago            Running             kube-apiserver                           0                   ad39d22ffd30c       kube-apiserver-addons-127532
	
	* 
	* ==> containerd <==
	* -- Journal begins at Thu 2023-10-05 20:03:59 UTC, ends at Thu 2023-10-05 20:06:44 UTC. --
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.081439848Z" level=info msg="StopPodSandbox for \"e688c2fe2b4f08929bce65988b68756e465072f3fae8a9174ab09a0ba22c7551\" returns successfully"
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.087020012Z" level=info msg="StartContainer for \"7c5ddde55ceb1811f1e35a7354d75148cc5d506d13e1efbc0c58a7e80de50b06\" returns successfully"
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.259894248Z" level=info msg="shim disconnected" id=bc37435dbc58d5a694ccd45365b80088ca649ddb956edc6a9940538aca0466de namespace=k8s.io
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.259983112Z" level=warning msg="cleaning up after shim disconnected" id=bc37435dbc58d5a694ccd45365b80088ca649ddb956edc6a9940538aca0466de namespace=k8s.io
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.259995769Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.336843973Z" level=info msg="shim disconnected" id=7c5ddde55ceb1811f1e35a7354d75148cc5d506d13e1efbc0c58a7e80de50b06 namespace=k8s.io
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.341751369Z" level=warning msg="cleaning up after shim disconnected" id=7c5ddde55ceb1811f1e35a7354d75148cc5d506d13e1efbc0c58a7e80de50b06 namespace=k8s.io
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.341964100Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.355944171Z" level=info msg="RemoveContainer for \"647ece1870de6fee356fa8d7707b29fb267cbc578540cadbba57660120831e27\""
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.373095338Z" level=info msg="RemoveContainer for \"647ece1870de6fee356fa8d7707b29fb267cbc578540cadbba57660120831e27\" returns successfully"
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.548597792Z" level=error msg="Attach for \"7c5ddde55ceb1811f1e35a7354d75148cc5d506d13e1efbc0c58a7e80de50b06\" failed" error="rpc error: code = InvalidArgument desc = tty and stderr cannot both be true"
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.621254963Z" level=info msg="StopContainer for \"80436625227c372675bfad06f6d65d59af5f077eddc2a817ebc6edd155106fff\" with timeout 30 (s)"
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.623939953Z" level=info msg="Stop container \"80436625227c372675bfad06f6d65d59af5f077eddc2a817ebc6edd155106fff\" with signal terminated"
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.764157294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.764806658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.764839104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.764853529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.784504103Z" level=info msg="TearDown network for sandbox \"bc37435dbc58d5a694ccd45365b80088ca649ddb956edc6a9940538aca0466de\" successfully"
	Oct 05 20:06:43 addons-127532 containerd[689]: time="2023-10-05T20:06:43.784747350Z" level=info msg="StopPodSandbox for \"bc37435dbc58d5a694ccd45365b80088ca649ddb956edc6a9940538aca0466de\" returns successfully"
	Oct 05 20:06:44 addons-127532 containerd[689]: time="2023-10-05T20:06:44.388403098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a,Uid:8f976fdf-30ae-4f0c-9090-0a00f59d3b9e,Namespace:local-path-storage,Attempt:0,} returns sandbox id \"8043f7f90282ac347d0c50c3d294d0fe212f523938ab223a07104bd8e03145ed\""
	Oct 05 20:06:44 addons-127532 containerd[689]: time="2023-10-05T20:06:44.445763888Z" level=info msg="CreateContainer within sandbox \"8043f7f90282ac347d0c50c3d294d0fe212f523938ab223a07104bd8e03145ed\" for container &ContainerMetadata{Name:helper-pod,Attempt:0,}"
	Oct 05 20:06:44 addons-127532 containerd[689]: time="2023-10-05T20:06:44.454822318Z" level=info msg="RemoveContainer for \"47bb3532af7dda6d6f48404584d33d777ac559c10e71f66c58f11bd57b4beb52\""
	Oct 05 20:06:44 addons-127532 containerd[689]: time="2023-10-05T20:06:44.494108629Z" level=info msg="RemoveContainer for \"47bb3532af7dda6d6f48404584d33d777ac559c10e71f66c58f11bd57b4beb52\" returns successfully"
	Oct 05 20:06:44 addons-127532 containerd[689]: time="2023-10-05T20:06:44.511497545Z" level=info msg="CreateContainer within sandbox \"8043f7f90282ac347d0c50c3d294d0fe212f523938ab223a07104bd8e03145ed\" for &ContainerMetadata{Name:helper-pod,Attempt:0,} returns container id \"d90a01e3a89ed2cbf2bcc38f3d843807942d5c08faecee3391fede6b034b8a5e\""
	Oct 05 20:06:44 addons-127532 containerd[689]: time="2023-10-05T20:06:44.512892387Z" level=info msg="StartContainer for \"d90a01e3a89ed2cbf2bcc38f3d843807942d5c08faecee3391fede6b034b8a5e\""
	
	* 
	* ==> coredns [e7196076ec2424635bdbab5f6ad8cc4801bbb3c985c15ba13b2ae7011fdf4e2a] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:35957 - 18973 "HINFO IN 7631298450489257716.7693694892864692671. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030965307s
	[INFO] 10.244.0.19:36082 - 40520 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000261262s
	[INFO] 10.244.0.19:41336 - 56889 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000176211s
	[INFO] 10.244.0.19:54677 - 26277 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129227s
	[INFO] 10.244.0.19:37769 - 44585 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084979s
	[INFO] 10.244.0.19:37867 - 1799 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000062836s
	[INFO] 10.244.0.19:50474 - 23947 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00006805s
	[INFO] 10.244.0.19:39676 - 59157 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00069342s
	[INFO] 10.244.0.19:32913 - 41803 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000402582s
	[INFO] 10.244.0.23:34350 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000529001s
	[INFO] 10.244.0.23:39147 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000135438s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-127532
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-127532
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
	                    minikube.k8s.io/name=addons-127532
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_05T20_04_51_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-127532
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-127532"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 20:04:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-127532
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Oct 2023 20:06:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 20:06:23 +0000   Thu, 05 Oct 2023 20:04:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 20:06:23 +0000   Thu, 05 Oct 2023 20:04:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 20:06:23 +0000   Thu, 05 Oct 2023 20:04:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 20:06:23 +0000   Thu, 05 Oct 2023 20:04:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    addons-127532
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b48f9d59a0e4c53962f12559fdeaefb
	  System UUID:                1b48f9d5-9a0e-4c53-962f-12559fdeaefb
	  Boot ID:                    0be53af3-9c88-4c16-b166-0064924fb992
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     registry-test                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  gadget                      gadget-hkzrd                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  gcp-auth                    gcp-auth-d4c87556c-jn7jv                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  ingress-nginx               ingress-nginx-controller-5c4c674fdc-gm89w                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-ht467                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     102s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 csi-hostpathplugin-xbqj8                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 etcd-addons-127532                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         116s
	  kube-system                 kube-apiserver-addons-127532                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-addons-127532                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-8zmq5                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-addons-127532                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 registry-88fwl                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 registry-proxy-rv8kw                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 snapshot-controller-58dbcc7b99-6zzrx                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 snapshot-controller-58dbcc7b99-nzpgl                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  local-path-storage          helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  local-path-storage          local-path-provisioner-78b46b4d5c-7z5mg                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node addons-127532 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node addons-127532 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x7 over 2m3s)  kubelet          Node addons-127532 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s                 kubelet          Node addons-127532 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s                 kubelet          Node addons-127532 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s                 kubelet          Node addons-127532 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  114s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                114s                 kubelet          Node addons-127532 status is now: NodeReady
	  Normal  RegisteredNode           103s                 node-controller  Node addons-127532 event: Registered Node addons-127532 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.099921] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.586755] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.920666] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149544] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Oct 5 20:04] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000023] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.199115] systemd-fstab-generator[558]: Ignoring "noauto" for root device
	[  +0.117190] systemd-fstab-generator[569]: Ignoring "noauto" for root device
	[  +0.146399] systemd-fstab-generator[582]: Ignoring "noauto" for root device
	[  +0.116026] systemd-fstab-generator[593]: Ignoring "noauto" for root device
	[  +0.265438] systemd-fstab-generator[620]: Ignoring "noauto" for root device
	[  +6.552476] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[ +21.851393] systemd-fstab-generator[987]: Ignoring "noauto" for root device
	[  +9.284233] systemd-fstab-generator[1342]: Ignoring "noauto" for root device
	[Oct 5 20:05] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.097369] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.020375] kauditd_printk_skb: 56 callbacks suppressed
	[ +12.848127] kauditd_printk_skb: 16 callbacks suppressed
	[ +24.947143] kauditd_printk_skb: 14 callbacks suppressed
	[Oct 5 20:06] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.084589] kauditd_printk_skb: 3 callbacks suppressed
	[  +7.870484] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.649054] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [0ce6cd93e10aa4dfe56925f84f155e2b521bd5989658e0026b7902356f03b7ad] <==
	* {"level":"info","ts":"2023-10-05T20:05:44.662632Z","caller":"traceutil/trace.go:171","msg":"trace[1673127484] transaction","detail":"{read_only:false; response_revision:939; number_of_response:1; }","duration":"261.061494ms","start":"2023-10-05T20:05:44.401548Z","end":"2023-10-05T20:05:44.662609Z","steps":["trace[1673127484] 'process raft request'  (duration: 260.953408ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T20:05:44.670745Z","caller":"traceutil/trace.go:171","msg":"trace[563752755] transaction","detail":"{read_only:false; response_revision:940; number_of_response:1; }","duration":"249.003407ms","start":"2023-10-05T20:05:44.421721Z","end":"2023-10-05T20:05:44.670724Z","steps":["trace[563752755] 'process raft request'  (duration: 248.554478ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T20:06:07.515568Z","caller":"traceutil/trace.go:171","msg":"trace[814810126] linearizableReadLoop","detail":"{readStateIndex:1105; appliedIndex:1104; }","duration":"190.385712ms","start":"2023-10-05T20:06:07.325162Z","end":"2023-10-05T20:06:07.515548Z","steps":["trace[814810126] 'read index received'  (duration: 190.049035ms)","trace[814810126] 'applied index is now lower than readState.Index'  (duration: 336.128µs)"],"step_count":2}
	{"level":"warn","ts":"2023-10-05T20:06:07.516024Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.742562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78037"}
	{"level":"info","ts":"2023-10-05T20:06:07.51609Z","caller":"traceutil/trace.go:171","msg":"trace[1684736230] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:1073; }","duration":"190.960272ms","start":"2023-10-05T20:06:07.32512Z","end":"2023-10-05T20:06:07.51608Z","steps":["trace[1684736230] 'agreement among raft nodes before linearized reading'  (duration: 190.605595ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T20:06:07.516763Z","caller":"traceutil/trace.go:171","msg":"trace[1782369080] transaction","detail":"{read_only:false; response_revision:1073; number_of_response:1; }","duration":"195.163093ms","start":"2023-10-05T20:06:07.321577Z","end":"2023-10-05T20:06:07.516741Z","steps":["trace[1782369080] 'process raft request'  (duration: 193.693133ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T20:06:24.641989Z","caller":"traceutil/trace.go:171","msg":"trace[136518638] linearizableReadLoop","detail":"{readStateIndex:1138; appliedIndex:1137; }","duration":"479.281207ms","start":"2023-10-05T20:06:24.162678Z","end":"2023-10-05T20:06:24.641959Z","steps":["trace[136518638] 'read index received'  (duration: 478.890194ms)","trace[136518638] 'applied index is now lower than readState.Index'  (duration: 390.124µs)"],"step_count":2}
	{"level":"info","ts":"2023-10-05T20:06:24.642691Z","caller":"traceutil/trace.go:171","msg":"trace[1575740469] transaction","detail":"{read_only:false; response_revision:1103; number_of_response:1; }","duration":"484.189286ms","start":"2023-10-05T20:06:24.158486Z","end":"2023-10-05T20:06:24.642675Z","steps":["trace[1575740469] 'process raft request'  (duration: 483.239099ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-05T20:06:24.643684Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"480.781346ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13762"}
	{"level":"info","ts":"2023-10-05T20:06:24.644058Z","caller":"traceutil/trace.go:171","msg":"trace[126253279] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1103; }","duration":"481.3975ms","start":"2023-10-05T20:06:24.162649Z","end":"2023-10-05T20:06:24.644046Z","steps":["trace[126253279] 'agreement among raft nodes before linearized reading'  (duration: 480.164354ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-05T20:06:24.644221Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-05T20:06:24.162634Z","time spent":"481.568758ms","remote":"127.0.0.1:37962","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":13785,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2023-10-05T20:06:24.646469Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-05T20:06:24.158463Z","time spent":"485.095077ms","remote":"127.0.0.1:37954","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1100 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-10-05T20:06:24.653847Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"409.068904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-05T20:06:24.654095Z","caller":"traceutil/trace.go:171","msg":"trace[2017807080] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1103; }","duration":"409.413598ms","start":"2023-10-05T20:06:24.244669Z","end":"2023-10-05T20:06:24.654083Z","steps":["trace[2017807080] 'agreement among raft nodes before linearized reading'  (duration: 402.229397ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-05T20:06:24.654268Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-05T20:06:24.244654Z","time spent":"409.603459ms","remote":"127.0.0.1:37920","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-10-05T20:06:24.654Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"454.895945ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10849"}
	{"level":"info","ts":"2023-10-05T20:06:24.654568Z","caller":"traceutil/trace.go:171","msg":"trace[666228382] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1103; }","duration":"455.468688ms","start":"2023-10-05T20:06:24.199088Z","end":"2023-10-05T20:06:24.654557Z","steps":["trace[666228382] 'agreement among raft nodes before linearized reading'  (duration: 447.600519ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-05T20:06:24.654753Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-05T20:06:24.199075Z","time spent":"455.66705ms","remote":"127.0.0.1:37962","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":10872,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"info","ts":"2023-10-05T20:06:28.875585Z","caller":"traceutil/trace.go:171","msg":"trace[79850264] transaction","detail":"{read_only:false; response_revision:1119; number_of_response:1; }","duration":"198.742809ms","start":"2023-10-05T20:06:28.676828Z","end":"2023-10-05T20:06:28.875571Z","steps":["trace[79850264] 'process raft request'  (duration: 198.654327ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T20:06:28.880245Z","caller":"traceutil/trace.go:171","msg":"trace[1320653100] linearizableReadLoop","detail":"{readStateIndex:1156; appliedIndex:1155; }","duration":"182.420947ms","start":"2023-10-05T20:06:28.697813Z","end":"2023-10-05T20:06:28.880234Z","steps":["trace[1320653100] 'read index received'  (duration: 178.872536ms)","trace[1320653100] 'applied index is now lower than readState.Index'  (duration: 3.547761ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-05T20:06:28.880519Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.708169ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10849"}
	{"level":"info","ts":"2023-10-05T20:06:28.880571Z","caller":"traceutil/trace.go:171","msg":"trace[1498457370] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1121; }","duration":"182.773154ms","start":"2023-10-05T20:06:28.697791Z","end":"2023-10-05T20:06:28.880564Z","steps":["trace[1498457370] 'agreement among raft nodes before linearized reading'  (duration: 182.625867ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T20:06:28.880756Z","caller":"traceutil/trace.go:171","msg":"trace[380149347] transaction","detail":"{read_only:false; response_revision:1120; number_of_response:1; }","duration":"188.779075ms","start":"2023-10-05T20:06:28.69197Z","end":"2023-10-05T20:06:28.880749Z","steps":["trace[380149347] 'process raft request'  (duration: 188.100404ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T20:06:28.880931Z","caller":"traceutil/trace.go:171","msg":"trace[1090670038] transaction","detail":"{read_only:false; response_revision:1121; number_of_response:1; }","duration":"127.284552ms","start":"2023-10-05T20:06:28.753639Z","end":"2023-10-05T20:06:28.880924Z","steps":["trace[1090670038] 'process raft request'  (duration: 126.564834ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T20:06:40.44164Z","caller":"traceutil/trace.go:171","msg":"trace[907534033] transaction","detail":"{read_only:false; response_revision:1238; number_of_response:1; }","duration":"115.469513ms","start":"2023-10-05T20:06:40.326144Z","end":"2023-10-05T20:06:40.441613Z","steps":["trace[907534033] 'process raft request'  (duration: 107.323131ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [b4cf3a8e005bae76aa05c4d88da948703de19ab2b140c1889fbd2427ee88781b] <==
	* 2023/10/05 20:06:29 GCP Auth Webhook started!
	2023/10/05 20:06:30 Ready to marshal response ...
	2023/10/05 20:06:30 Ready to write response ...
	2023/10/05 20:06:30 Ready to marshal response ...
	2023/10/05 20:06:30 Ready to write response ...
	2023/10/05 20:06:34 Ready to marshal response ...
	2023/10/05 20:06:34 Ready to write response ...
	2023/10/05 20:06:39 Ready to marshal response ...
	2023/10/05 20:06:39 Ready to write response ...
	2023/10/05 20:06:42 Ready to marshal response ...
	2023/10/05 20:06:42 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  20:06:45 up 2 min,  0 users,  load average: 3.99, 1.98, 0.77
	Linux addons-127532 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [187b61c6228f6aeb260635d7ce329dc74124c163ccf24a73fd7798ba26589330] <==
	* W1005 20:05:14.402254       1 handler_proxy.go:93] no RequestInfo found in the context
	E1005 20:05:14.402277       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1005 20:05:14.405863       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1005 20:05:15.823276       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.101.238.118"}
	I1005 20:05:15.864536       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.104.183.132"}
	I1005 20:05:15.933701       1 controller.go:624] quota admission added evaluator for: jobs.batch
	W1005 20:05:17.332824       1 aggregator.go:165] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1005 20:05:18.649092       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.97.134.4"}
	I1005 20:05:18.689607       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I1005 20:05:18.913116       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.102.54.75"}
	W1005 20:05:20.710391       1 aggregator.go:165] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1005 20:05:21.779136       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.102.247.190"}
	E1005 20:05:43.916687       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.244.121:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.244.121:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.244.121:443: connect: connection refused
	W1005 20:05:43.916995       1 handler_proxy.go:93] no RequestInfo found in the context
	E1005 20:05:43.917063       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1005 20:05:43.917639       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.244.121:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.244.121:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.244.121:443: connect: connection refused
	I1005 20:05:43.918849       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1005 20:05:43.923812       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.244.121:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.244.121:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.244.121:443: connect: connection refused
	E1005 20:05:43.947178       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.244.121:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.244.121:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.244.121:443: connect: connection refused
	I1005 20:05:44.098438       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1005 20:05:47.405200       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1005 20:06:44.932602       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	* 
	* ==> kube-controller-manager [08cdeaff99532ba296326a2927fe09ca25ba8f2c98f9a41c464987d1c6c67d70] <==
	* I1005 20:06:04.316302       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I1005 20:06:04.317402       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1005 20:06:04.326731       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1005 20:06:04.354287       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1005 20:06:04.370962       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1005 20:06:04.371591       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I1005 20:06:11.697650       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="11.531267ms"
	I1005 20:06:11.699378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="94.145µs"
	I1005 20:06:27.077150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5c4c674fdc" duration="152.515µs"
	I1005 20:06:29.099833       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="15.046576ms"
	I1005 20:06:29.100461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="191.285µs"
	I1005 20:06:29.945994       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I1005 20:06:30.152468       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1005 20:06:32.841014       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1005 20:06:34.021581       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1005 20:06:34.029739       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1005 20:06:34.101403       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1005 20:06:34.107755       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1005 20:06:35.238099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="8.875µs"
	I1005 20:06:40.289647       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5c4c674fdc" duration="42.657965ms"
	I1005 20:06:40.293984       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5c4c674fdc" duration="82.891µs"
	I1005 20:06:41.321032       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-7d49f968d9" duration="6.786µs"
	I1005 20:06:41.658993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="8.045µs"
	I1005 20:06:42.914879       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1005 20:06:43.533609       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="7.115µs"
	
	* 
	* ==> kube-proxy [7acc5650cb59c003f57c4d5efc042886275bc492435f7922d35821dc205b7119] <==
	* I1005 20:05:05.439046       1 server_others.go:69] "Using iptables proxy"
	I1005 20:05:05.454145       1 node.go:141] Successfully retrieved node IP: 192.168.39.191
	I1005 20:05:05.682624       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1005 20:05:05.682646       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1005 20:05:05.695712       1 server_others.go:152] "Using iptables Proxier"
	I1005 20:05:05.695772       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1005 20:05:05.696064       1 server.go:846] "Version info" version="v1.28.2"
	I1005 20:05:05.696086       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 20:05:05.697874       1 config.go:188] "Starting service config controller"
	I1005 20:05:05.697887       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1005 20:05:05.697905       1 config.go:97] "Starting endpoint slice config controller"
	I1005 20:05:05.697911       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1005 20:05:05.698302       1 config.go:315] "Starting node config controller"
	I1005 20:05:05.698389       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1005 20:05:05.798608       1 shared_informer.go:318] Caches are synced for node config
	I1005 20:05:05.798637       1 shared_informer.go:318] Caches are synced for service config
	I1005 20:05:05.798659       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [55902ce9be72256c404d7ea303be500c40e8a1aacb9345858cc19717023d93a7] <==
	* W1005 20:04:47.555009       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1005 20:04:47.555410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1005 20:04:48.425544       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1005 20:04:48.425856       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1005 20:04:48.437885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1005 20:04:48.438029       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1005 20:04:48.466995       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 20:04:48.467141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1005 20:04:48.585698       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1005 20:04:48.586042       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1005 20:04:48.650263       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1005 20:04:48.650620       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1005 20:04:48.691454       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1005 20:04:48.693126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1005 20:04:48.695194       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1005 20:04:48.695542       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1005 20:04:48.713716       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1005 20:04:48.714008       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1005 20:04:48.832807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1005 20:04:48.832859       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1005 20:04:48.846690       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 20:04:48.846957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1005 20:04:48.897069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1005 20:04:48.897283       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1005 20:04:51.644576       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-10-05 20:03:59 UTC, ends at Thu 2023-10-05 20:06:45 UTC. --
	Oct 05 20:06:41 addons-127532 kubelet[1349]: I1005 20:06:41.173953    1349 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4292393b5edd306bb9928fc27c68986de73bd0b794bfe5320af58244a7f1629"
	Oct 05 20:06:41 addons-127532 kubelet[1349]: I1005 20:06:41.184032    1349 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a80410d81961724cbdb72c3f57dfceb1f4630e89c1108bf33058f4206577858"
	Oct 05 20:06:41 addons-127532 kubelet[1349]: I1005 20:06:41.385561    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7390c2b6-cd54-4224-b5c6-08a22a9e0a88" path="/var/lib/kubelet/pods/7390c2b6-cd54-4224-b5c6-08a22a9e0a88/volumes"
	Oct 05 20:06:42 addons-127532 kubelet[1349]: I1005 20:06:42.456895    1349 topology_manager.go:215] "Topology Admit Handler" podUID="8f976fdf-30ae-4f0c-9090-0a00f59d3b9e" podNamespace="local-path-storage" podName="helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a"
	Oct 05 20:06:42 addons-127532 kubelet[1349]: E1005 20:06:42.457556    1349 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7390c2b6-cd54-4224-b5c6-08a22a9e0a88" containerName="helm-test"
	Oct 05 20:06:42 addons-127532 kubelet[1349]: E1005 20:06:42.457678    1349 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16cd4cae-7260-4762-a7e0-3797abef56ec" containerName="busybox"
	Oct 05 20:06:42 addons-127532 kubelet[1349]: I1005 20:06:42.457906    1349 memory_manager.go:346] "RemoveStaleState removing state" podUID="16cd4cae-7260-4762-a7e0-3797abef56ec" containerName="busybox"
	Oct 05 20:06:42 addons-127532 kubelet[1349]: I1005 20:06:42.457989    1349 memory_manager.go:346] "RemoveStaleState removing state" podUID="7390c2b6-cd54-4224-b5c6-08a22a9e0a88" containerName="helm-test"
	Oct 05 20:06:42 addons-127532 kubelet[1349]: I1005 20:06:42.548679    1349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr5gc\" (UniqueName: \"kubernetes.io/projected/8f976fdf-30ae-4f0c-9090-0a00f59d3b9e-kube-api-access-qr5gc\") pod \"helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a\" (UID: \"8f976fdf-30ae-4f0c-9090-0a00f59d3b9e\") " pod="local-path-storage/helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a"
	Oct 05 20:06:42 addons-127532 kubelet[1349]: I1005 20:06:42.548775    1349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8f976fdf-30ae-4f0c-9090-0a00f59d3b9e-gcp-creds\") pod \"helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a\" (UID: \"8f976fdf-30ae-4f0c-9090-0a00f59d3b9e\") " pod="local-path-storage/helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a"
	Oct 05 20:06:42 addons-127532 kubelet[1349]: I1005 20:06:42.548807    1349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/8f976fdf-30ae-4f0c-9090-0a00f59d3b9e-script\") pod \"helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a\" (UID: \"8f976fdf-30ae-4f0c-9090-0a00f59d3b9e\") " pod="local-path-storage/helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a"
	Oct 05 20:06:42 addons-127532 kubelet[1349]: I1005 20:06:42.548827    1349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/8f976fdf-30ae-4f0c-9090-0a00f59d3b9e-data\") pod \"helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a\" (UID: \"8f976fdf-30ae-4f0c-9090-0a00f59d3b9e\") " pod="local-path-storage/helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a"
	Oct 05 20:06:43 addons-127532 kubelet[1349]: I1005 20:06:43.265560    1349 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l8cm\" (UniqueName: \"kubernetes.io/projected/1b6b1557-289f-4ad0-af09-514681a85adf-kube-api-access-5l8cm\") pod \"1b6b1557-289f-4ad0-af09-514681a85adf\" (UID: \"1b6b1557-289f-4ad0-af09-514681a85adf\") "
	Oct 05 20:06:43 addons-127532 kubelet[1349]: I1005 20:06:43.277926    1349 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b6b1557-289f-4ad0-af09-514681a85adf-kube-api-access-5l8cm" (OuterVolumeSpecName: "kube-api-access-5l8cm") pod "1b6b1557-289f-4ad0-af09-514681a85adf" (UID: "1b6b1557-289f-4ad0-af09-514681a85adf"). InnerVolumeSpecName "kube-api-access-5l8cm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 05 20:06:43 addons-127532 kubelet[1349]: I1005 20:06:43.282644    1349 scope.go:117] "RemoveContainer" containerID="647ece1870de6fee356fa8d7707b29fb267cbc578540cadbba57660120831e27"
	Oct 05 20:06:43 addons-127532 kubelet[1349]: I1005 20:06:43.427587    1349 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5l8cm\" (UniqueName: \"kubernetes.io/projected/1b6b1557-289f-4ad0-af09-514681a85adf-kube-api-access-5l8cm\") on node \"addons-127532\" DevicePath \"\""
	Oct 05 20:06:43 addons-127532 kubelet[1349]: I1005 20:06:43.457938    1349 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/registry-test" podStartSLOduration=3.326141763 podCreationTimestamp="2023-10-05 20:06:39 +0000 UTC" firstStartedPulling="2023-10-05 20:06:40.792445943 +0000 UTC m=+109.741711131" lastFinishedPulling="2023-10-05 20:06:41.924188488 +0000 UTC m=+110.873453677" observedRunningTime="2023-10-05 20:06:43.454585386 +0000 UTC m=+112.403850594" watchObservedRunningTime="2023-10-05 20:06:43.457884309 +0000 UTC m=+112.407149498"
	Oct 05 20:06:43 addons-127532 kubelet[1349]: I1005 20:06:43.497364    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="16cd4cae-7260-4762-a7e0-3797abef56ec" path="/var/lib/kubelet/pods/16cd4cae-7260-4762-a7e0-3797abef56ec/volumes"
	Oct 05 20:06:43 addons-127532 kubelet[1349]: E1005 20:06:43.562237    1349 remote_runtime.go:557] "Attach container from runtime service failed" err="rpc error: code = InvalidArgument desc = tty and stderr cannot both be true" containerID="7c5ddde55ceb1811f1e35a7354d75148cc5d506d13e1efbc0c58a7e80de50b06"
	Oct 05 20:06:43 addons-127532 kubelet[1349]: I1005 20:06:43.841401    1349 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5h4h\" (UniqueName: \"kubernetes.io/projected/a40dafcd-da9a-46c6-931f-65e21917673c-kube-api-access-p5h4h\") pod \"a40dafcd-da9a-46c6-931f-65e21917673c\" (UID: \"a40dafcd-da9a-46c6-931f-65e21917673c\") "
	Oct 05 20:06:43 addons-127532 kubelet[1349]: I1005 20:06:43.844404    1349 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a40dafcd-da9a-46c6-931f-65e21917673c-kube-api-access-p5h4h" (OuterVolumeSpecName: "kube-api-access-p5h4h") pod "a40dafcd-da9a-46c6-931f-65e21917673c" (UID: "a40dafcd-da9a-46c6-931f-65e21917673c"). InnerVolumeSpecName "kube-api-access-p5h4h". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 05 20:06:43 addons-127532 kubelet[1349]: I1005 20:06:43.942900    1349 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p5h4h\" (UniqueName: \"kubernetes.io/projected/a40dafcd-da9a-46c6-931f-65e21917673c-kube-api-access-p5h4h\") on node \"addons-127532\" DevicePath \"\""
	Oct 05 20:06:44 addons-127532 kubelet[1349]: I1005 20:06:44.407846    1349 scope.go:117] "RemoveContainer" containerID="47bb3532af7dda6d6f48404584d33d777ac559c10e71f66c58f11bd57b4beb52"
	Oct 05 20:06:45 addons-127532 kubelet[1349]: I1005 20:06:45.349463    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1b6b1557-289f-4ad0-af09-514681a85adf" path="/var/lib/kubelet/pods/1b6b1557-289f-4ad0-af09-514681a85adf/volumes"
	Oct 05 20:06:45 addons-127532 kubelet[1349]: I1005 20:06:45.350618    1349 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a40dafcd-da9a-46c6-931f-65e21917673c" path="/var/lib/kubelet/pods/a40dafcd-da9a-46c6-931f-65e21917673c/volumes"
	
	* 
	* ==> storage-provisioner [c77114a1dcc2304241aa0e3e2ccc474ff92ef13a44d7c07dbba3a5630e742864] <==
	* I1005 20:05:33.571200       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1005 20:05:33.593978       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1005 20:05:33.594074       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1005 20:05:33.614871       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1005 20:05:33.616880       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"542a975d-3bfc-441e-86a7-8d945436228c", APIVersion:"v1", ResourceVersion:"872", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-127532_f88edd5e-ac0d-4898-964e-906541854456 became leader
	I1005 20:05:33.619284       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-127532_f88edd5e-ac0d-4898-964e-906541854456!
	I1005 20:05:33.731794       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-127532_f88edd5e-ac0d-4898-964e-906541854456!
	E1005 20:06:42.205792       1 controller.go:1050] claim "dc50f883-7edc-4880-b5eb-ddfd177dce7a" in work queue no longer exists
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-127532 -n addons-127532
2023/10/05 20:06:45 [DEBUG] GET http://192.168.39.191:5000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-127532 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-qj7g6 ingress-nginx-admission-patch-9dv5t helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-127532 describe pod ingress-nginx-admission-create-qj7g6 ingress-nginx-admission-patch-9dv5t helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-127532 describe pod ingress-nginx-admission-create-qj7g6 ingress-nginx-admission-patch-9dv5t helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a: exit status 1 (82.018457ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qj7g6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9dv5t" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-127532 describe pod ingress-nginx-admission-create-qj7g6 ingress-nginx-admission-patch-9dv5t helper-pod-delete-pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a: exit status 1
--- FAIL: TestAddons/parallel/Headlamp (4.82s)

TestErrorSpam/setup (63.3s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-363911 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-363911 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-363911 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-363911 --driver=kvm2  --container-runtime=containerd: (1m3.299996383s)
error_spam_test.go:96: unexpected stderr: "X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17363-196818/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9: no such file or directory"
error_spam_test.go:110: minikube stdout:
* [nospam-363911] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=17363
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/17363-196818/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-196818/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting control plane node nospam-363911 in cluster nospam-363911
* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.2 on containerd 1.7.6 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-363911" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17363-196818/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9: no such file or directory
--- FAIL: TestErrorSpam/setup (63.30s)

                                                
                                    

Test pass (267/305)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 43.16
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.2/json-events 5.63
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.15
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
19 TestBinaryMirror 0.58
20 TestOffline 109.94
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
25 TestAddons/Setup 163.57
27 TestAddons/parallel/Registry 17.37
28 TestAddons/parallel/Ingress 23.03
29 TestAddons/parallel/InspektorGadget 11.03
30 TestAddons/parallel/MetricsServer 6.27
31 TestAddons/parallel/HelmTiller 12.79
33 TestAddons/parallel/CSI 53.83
35 TestAddons/parallel/CloudSpanner 5.92
36 TestAddons/parallel/LocalPath 57.05
39 TestAddons/serial/GCPAuth/Namespaces 0.14
40 TestAddons/StoppedEnableDisable 92.53
41 TestCertOptions 101.58
42 TestCertExpiration 330.38
44 TestForceSystemdFlag 110.95
45 TestForceSystemdEnv 69.74
47 TestKVMDriverInstallOrUpdate 3.21
52 TestErrorSpam/start 0.34
53 TestErrorSpam/status 0.74
54 TestErrorSpam/pause 1.51
55 TestErrorSpam/unpause 1.65
56 TestErrorSpam/stop 2.21
59 TestFunctional/serial/CopySyncFile 0
60 TestFunctional/serial/StartWithProxy 75.79
61 TestFunctional/serial/AuditLog 0
62 TestFunctional/serial/SoftStart 39.72
63 TestFunctional/serial/KubeContext 0.04
64 TestFunctional/serial/KubectlGetPods 0.08
67 TestFunctional/serial/CacheCmd/cache/add_remote 3.82
68 TestFunctional/serial/CacheCmd/cache/add_local 1.73
69 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
70 TestFunctional/serial/CacheCmd/cache/list 0.04
71 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
72 TestFunctional/serial/CacheCmd/cache/cache_reload 2.11
73 TestFunctional/serial/CacheCmd/cache/delete 0.08
74 TestFunctional/serial/MinikubeKubectlCmd 0.11
75 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
76 TestFunctional/serial/ExtraConfig 38.93
77 TestFunctional/serial/ComponentHealth 0.07
78 TestFunctional/serial/LogsCmd 1.58
79 TestFunctional/serial/LogsFileCmd 1.52
80 TestFunctional/serial/InvalidService 3.65
82 TestFunctional/parallel/ConfigCmd 0.29
83 TestFunctional/parallel/DashboardCmd 18.25
84 TestFunctional/parallel/DryRun 0.28
85 TestFunctional/parallel/InternationalLanguage 0.14
86 TestFunctional/parallel/StatusCmd 0.89
90 TestFunctional/parallel/ServiceCmdConnect 12.51
91 TestFunctional/parallel/AddonsCmd 0.11
92 TestFunctional/parallel/PersistentVolumeClaim 45.26
94 TestFunctional/parallel/SSHCmd 0.43
95 TestFunctional/parallel/CpCmd 0.89
96 TestFunctional/parallel/MySQL 31.39
97 TestFunctional/parallel/FileSync 0.22
98 TestFunctional/parallel/CertSync 1.43
102 TestFunctional/parallel/NodeLabels 0.07
104 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
106 TestFunctional/parallel/License 0.16
116 TestFunctional/parallel/Version/short 0.04
117 TestFunctional/parallel/Version/components 0.7
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
122 TestFunctional/parallel/ImageCommands/ImageBuild 3.82
123 TestFunctional/parallel/ImageCommands/Setup 4.62
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.09
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.73
126 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
127 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
128 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
129 TestFunctional/parallel/MountCmd/any-port 11.06
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.22
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.86
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
133 TestFunctional/parallel/MountCmd/specific-port 1.7
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.57
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.74
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.92
137 TestFunctional/parallel/ServiceCmd/DeployApp 8.2
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
139 TestFunctional/parallel/ProfileCmd/profile_list 0.29
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
141 TestFunctional/parallel/ServiceCmd/List 1.25
142 TestFunctional/parallel/ServiceCmd/JSONOutput 1.39
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
144 TestFunctional/parallel/ServiceCmd/Format 0.34
145 TestFunctional/parallel/ServiceCmd/URL 0.36
146 TestFunctional/delete_addon-resizer_images 0.07
147 TestFunctional/delete_my-image_image 0.02
148 TestFunctional/delete_minikube_cached_images 0.02
152 TestIngressAddonLegacy/StartLegacyK8sCluster 83.16
154 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.02
155 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.63
156 TestIngressAddonLegacy/serial/ValidateIngressAddons 38.53
159 TestJSONOutput/start/Command 117.44
160 TestJSONOutput/start/Audit 0
162 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/pause/Command 0.64
166 TestJSONOutput/pause/Audit 0
168 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/unpause/Command 0.62
172 TestJSONOutput/unpause/Audit 0
174 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/stop/Command 7.1
178 TestJSONOutput/stop/Audit 0
180 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
182 TestErrorJSONOutput 0.19
187 TestMainNoArgs 0.04
188 TestMinikubeProfile 135.13
191 TestMountStart/serial/StartWithMountFirst 28.72
192 TestMountStart/serial/VerifyMountFirst 0.38
193 TestMountStart/serial/StartWithMountSecond 28.54
194 TestMountStart/serial/VerifyMountSecond 0.38
195 TestMountStart/serial/DeleteFirst 0.67
196 TestMountStart/serial/VerifyMountPostDelete 0.39
197 TestMountStart/serial/Stop 1.16
198 TestMountStart/serial/RestartStopped 22.35
199 TestMountStart/serial/VerifyMountPostStop 0.38
202 TestMultiNode/serial/FreshStart2Nodes 128.57
203 TestMultiNode/serial/DeployApp2Nodes 4.12
204 TestMultiNode/serial/PingHostFrom2Pods 0.84
205 TestMultiNode/serial/AddNode 42.69
206 TestMultiNode/serial/ProfileList 0.23
207 TestMultiNode/serial/CopyFile 7.88
208 TestMultiNode/serial/StopNode 2.33
209 TestMultiNode/serial/StartAfterStop 29.66
210 TestMultiNode/serial/RestartKeepsNodes 316.82
211 TestMultiNode/serial/DeleteNode 1.76
212 TestMultiNode/serial/StopMultiNode 183.45
213 TestMultiNode/serial/RestartMultiNode 93.82
214 TestMultiNode/serial/ValidateNameConflict 68.75
219 TestPreload 332.61
221 TestScheduledStopUnix 134.44
225 TestRunningBinaryUpgrade 216.24
227 TestKubernetesUpgrade 209.93
230 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
231 TestNoKubernetes/serial/StartWithK8s 122.44
232 TestNoKubernetes/serial/StartWithStopK8s 56.52
233 TestNoKubernetes/serial/Start 31.58
234 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
235 TestNoKubernetes/serial/ProfileList 2.94
236 TestNoKubernetes/serial/Stop 1.31
237 TestNoKubernetes/serial/StartNoArgs 40.92
245 TestNetworkPlugins/group/false 2.88
250 TestPause/serial/Start 110.24
251 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
259 TestStoppedBinaryUpgrade/Setup 0.62
260 TestStoppedBinaryUpgrade/Upgrade 148.96
261 TestPause/serial/SecondStartNoReconfiguration 8.4
262 TestPause/serial/Pause 0.79
263 TestPause/serial/VerifyStatus 0.29
264 TestPause/serial/Unpause 0.79
265 TestPause/serial/PauseAgain 0.94
266 TestPause/serial/DeletePaused 1.25
267 TestPause/serial/VerifyDeletedResources 19.66
268 TestNetworkPlugins/group/auto/Start 159.62
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.95
270 TestNetworkPlugins/group/kindnet/Start 100.79
271 TestNetworkPlugins/group/calico/Start 145.12
272 TestNetworkPlugins/group/custom-flannel/Start 149.64
273 TestNetworkPlugins/group/auto/KubeletFlags 0.22
274 TestNetworkPlugins/group/auto/NetCatPod 12.34
275 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
276 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
277 TestNetworkPlugins/group/kindnet/NetCatPod 10.83
278 TestNetworkPlugins/group/auto/DNS 0.22
279 TestNetworkPlugins/group/auto/Localhost 0.24
280 TestNetworkPlugins/group/auto/HairPin 0.19
281 TestNetworkPlugins/group/kindnet/DNS 0.22
282 TestNetworkPlugins/group/kindnet/Localhost 0.2
283 TestNetworkPlugins/group/kindnet/HairPin 0.16
284 TestNetworkPlugins/group/enable-default-cni/Start 126.6
285 TestNetworkPlugins/group/flannel/Start 119.47
286 TestNetworkPlugins/group/calico/ControllerPod 5.03
287 TestNetworkPlugins/group/calico/KubeletFlags 0.26
288 TestNetworkPlugins/group/calico/NetCatPod 15.6
289 TestNetworkPlugins/group/calico/DNS 0.2
290 TestNetworkPlugins/group/calico/Localhost 0.14
291 TestNetworkPlugins/group/calico/HairPin 0.16
292 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
293 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.53
294 TestNetworkPlugins/group/bridge/Start 87.97
295 TestNetworkPlugins/group/custom-flannel/DNS 0.22
296 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
297 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
299 TestStartStop/group/old-k8s-version/serial/FirstStart 135.98
300 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
301 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.91
302 TestNetworkPlugins/group/flannel/ControllerPod 5.03
303 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
304 TestNetworkPlugins/group/flannel/NetCatPod 11.34
305 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
306 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
307 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
308 TestNetworkPlugins/group/flannel/DNS 0.22
309 TestNetworkPlugins/group/flannel/Localhost 0.18
310 TestNetworkPlugins/group/flannel/HairPin 0.27
311 TestNetworkPlugins/group/bridge/KubeletFlags 0.67
312 TestNetworkPlugins/group/bridge/NetCatPod 12.45
314 TestStartStop/group/no-preload/serial/FirstStart 87.92
315 TestNetworkPlugins/group/bridge/DNS 0.25
317 TestStartStop/group/embed-certs/serial/FirstStart 144.74
318 TestNetworkPlugins/group/bridge/Localhost 0.18
319 TestNetworkPlugins/group/bridge/HairPin 0.18
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 117.52
322 TestStartStop/group/old-k8s-version/serial/DeployApp 8.67
323 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.2
324 TestStartStop/group/old-k8s-version/serial/Stop 91.8
325 TestStartStop/group/no-preload/serial/DeployApp 8.03
326 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.33
327 TestStartStop/group/no-preload/serial/Stop 92.32
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.45
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.32
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.13
331 TestStartStop/group/embed-certs/serial/DeployApp 8.53
332 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.33
333 TestStartStop/group/embed-certs/serial/Stop 92.29
334 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
335 TestStartStop/group/old-k8s-version/serial/SecondStart 469.79
336 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
337 TestStartStop/group/no-preload/serial/SecondStart 318.57
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 317.46
340 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
341 TestStartStop/group/embed-certs/serial/SecondStart 356.97
342 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
343 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
344 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
345 TestStartStop/group/no-preload/serial/Pause 2.68
347 TestStartStop/group/newest-cni/serial/FirstStart 91.03
348 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.04
349 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.21
350 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
351 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.73
352 TestStartStop/group/newest-cni/serial/DeployApp 0
353 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.91
354 TestStartStop/group/newest-cni/serial/Stop 2.27
355 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 19.03
356 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
357 TestStartStop/group/newest-cni/serial/SecondStart 51.72
358 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
359 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
360 TestStartStop/group/embed-certs/serial/Pause 3.07
361 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
362 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
363 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
364 TestStartStop/group/old-k8s-version/serial/Pause 2.71
365 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
366 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
367 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
368 TestStartStop/group/newest-cni/serial/Pause 2.51
TestDownloadOnly/v1.16.0/json-events (43.16s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-973200 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-973200 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (43.158586527s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (43.16s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-973200
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-973200: exit status 85 (68.482173ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-973200 | jenkins | v1.31.2 | 05 Oct 23 20:02 UTC |          |
	|         | -p download-only-973200        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 20:02:55
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 20:02:55.768623  204015 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:02:55.768721  204015 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:02:55.768729  204015 out.go:309] Setting ErrFile to fd 2...
	I1005 20:02:55.768733  204015 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:02:55.768937  204015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-196818/.minikube/bin
	W1005 20:02:55.769042  204015 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17363-196818/.minikube/config/config.json: open /home/jenkins/minikube-integration/17363-196818/.minikube/config/config.json: no such file or directory
	I1005 20:02:55.769670  204015 out.go:303] Setting JSON to true
	I1005 20:02:55.770553  204015 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":20728,"bootTime":1696515448,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:02:55.770628  204015 start.go:138] virtualization: kvm guest
	I1005 20:02:55.773310  204015 out.go:97] [download-only-973200] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:02:55.775136  204015 out.go:169] MINIKUBE_LOCATION=17363
	W1005 20:02:55.773436  204015 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17363-196818/.minikube/cache/preloaded-tarball: no such file or directory
	I1005 20:02:55.773454  204015 notify.go:220] Checking for updates...
	I1005 20:02:55.777935  204015 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:02:55.779647  204015 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17363-196818/kubeconfig
	I1005 20:02:55.781109  204015 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-196818/.minikube
	I1005 20:02:55.782480  204015 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1005 20:02:55.785395  204015 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1005 20:02:55.785691  204015 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:02:55.822405  204015 out.go:97] Using the kvm2 driver based on user configuration
	I1005 20:02:55.822431  204015 start.go:298] selected driver: kvm2
	I1005 20:02:55.822437  204015 start.go:902] validating driver "kvm2" against <nil>
	I1005 20:02:55.822832  204015 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:02:55.822920  204015 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17363-196818/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1005 20:02:55.838438  204015 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1005 20:02:55.838502  204015 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 20:02:55.838984  204015 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1005 20:02:55.839132  204015 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1005 20:02:55.839191  204015 cni.go:84] Creating CNI manager for ""
	I1005 20:02:55.839206  204015 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1005 20:02:55.839217  204015 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1005 20:02:55.839226  204015 start_flags.go:321] config:
	{Name:download-only-973200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-973200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:02:55.839475  204015 iso.go:125] acquiring lock: {Name:mk57851d2f6689e37478de1afefefb6b4948072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:02:55.841727  204015 out.go:97] Downloading VM boot image ...
	I1005 20:02:55.841758  204015 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17363-196818/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1005 20:03:32.188137  204015 out.go:97] Starting control plane node download-only-973200 in cluster download-only-973200
	I1005 20:03:32.188178  204015 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1005 20:03:32.220210  204015 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1005 20:03:32.220256  204015 cache.go:57] Caching tarball of preloaded images
	I1005 20:03:32.220460  204015 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1005 20:03:32.223120  204015 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1005 20:03:32.223166  204015 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1005 20:03:32.268109  204015 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/17363-196818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-973200"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.2/json-events (5.63s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-973200 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-973200 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (5.631762323s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (5.63s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-973200
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-973200: exit status 85 (64.770754ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-973200 | jenkins | v1.31.2 | 05 Oct 23 20:02 UTC |          |
	|         | -p download-only-973200        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-973200 | jenkins | v1.31.2 | 05 Oct 23 20:03 UTC |          |
	|         | -p download-only-973200        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 20:03:39
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 20:03:39.003961  204148 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:03:39.004110  204148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:03:39.004124  204148 out.go:309] Setting ErrFile to fd 2...
	I1005 20:03:39.004132  204148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:03:39.004325  204148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-196818/.minikube/bin
	W1005 20:03:39.004444  204148 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17363-196818/.minikube/config/config.json: open /home/jenkins/minikube-integration/17363-196818/.minikube/config/config.json: no such file or directory
	I1005 20:03:39.004909  204148 out.go:303] Setting JSON to true
	I1005 20:03:39.005801  204148 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":20771,"bootTime":1696515448,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:03:39.005870  204148 start.go:138] virtualization: kvm guest
	I1005 20:03:39.008892  204148 out.go:97] [download-only-973200] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:03:39.011255  204148 out.go:169] MINIKUBE_LOCATION=17363
	I1005 20:03:39.009167  204148 notify.go:220] Checking for updates...
	I1005 20:03:39.015397  204148 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:03:39.017640  204148 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17363-196818/kubeconfig
	I1005 20:03:39.019812  204148 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-196818/.minikube
	I1005 20:03:39.022470  204148 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1005 20:03:39.026663  204148 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1005 20:03:39.027235  204148 config.go:182] Loaded profile config "download-only-973200": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1005 20:03:39.027296  204148 start.go:810] api.Load failed for download-only-973200: filestore "download-only-973200": Docker machine "download-only-973200" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1005 20:03:39.027416  204148 driver.go:378] Setting default libvirt URI to qemu:///system
	W1005 20:03:39.027456  204148 start.go:810] api.Load failed for download-only-973200: filestore "download-only-973200": Docker machine "download-only-973200" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1005 20:03:39.063920  204148 out.go:97] Using the kvm2 driver based on existing profile
	I1005 20:03:39.063964  204148 start.go:298] selected driver: kvm2
	I1005 20:03:39.063971  204148 start.go:902] validating driver "kvm2" against &{Name:download-only-973200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-973200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:03:39.064442  204148 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:03:39.064560  204148 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17363-196818/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1005 20:03:39.081857  204148 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1005 20:03:39.082714  204148 cni.go:84] Creating CNI manager for ""
	I1005 20:03:39.082741  204148 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1005 20:03:39.082755  204148 start_flags.go:321] config:
	{Name:download-only-973200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-973200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:03:39.082913  204148 iso.go:125] acquiring lock: {Name:mk57851d2f6689e37478de1afefefb6b4948072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 20:03:39.085334  204148 out.go:97] Starting control plane node download-only-973200 in cluster download-only-973200
	I1005 20:03:39.085364  204148 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1005 20:03:39.115525  204148 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4
	I1005 20:03:39.115592  204148 cache.go:57] Caching tarball of preloaded images
	I1005 20:03:39.115837  204148 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1005 20:03:39.118182  204148 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1005 20:03:39.118238  204148 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4 ...
	I1005 20:03:39.172097  204148 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:ae58936c147f05f34778878c23d3887a -> /home/jenkins/minikube-integration/17363-196818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4
	I1005 20:03:42.926573  204148 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4 ...
	I1005 20:03:42.926671  204148 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17363-196818/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4 ...
	I1005 20:03:43.860478  204148 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on containerd
	I1005 20:03:43.860632  204148 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/download-only-973200/config.json ...
	I1005 20:03:43.860838  204148 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1005 20:03:43.861049  204148 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17363-196818/.minikube/cache/linux/amd64/v1.28.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-973200"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.15s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-973200
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-677315 --alsologtostderr --binary-mirror http://127.0.0.1:33855 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-677315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-677315
--- PASS: TestBinaryMirror (0.58s)

TestOffline (109.94s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-391711 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-391711 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m48.448670454s)
helpers_test.go:175: Cleaning up "offline-containerd-391711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-391711
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-391711: (1.486823879s)
--- PASS: TestOffline (109.94s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:926: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-127532
addons_test.go:926: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-127532: exit status 85 (50.465248ms)

-- stdout --
	* Profile "addons-127532" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-127532"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:937: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-127532
addons_test.go:937: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-127532: exit status 85 (50.920614ms)

-- stdout --
	* Profile "addons-127532" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-127532"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (163.57s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-127532 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-127532 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m43.566495659s)
--- PASS: TestAddons/Setup (163.57s)

TestAddons/parallel/Registry (17.37s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 22.035167ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-88fwl" [59c90c75-e8df-40f9-9c0f-ef6b6e5f7c48] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.028680228s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rv8kw" [0faef84e-a6c6-4c66-972e-567277a6613b] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.02692715s
addons_test.go:338: (dbg) Run:  kubectl --context addons-127532 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-127532 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-127532 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.308819838s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-127532 ip
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-127532 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.37s)

TestAddons/parallel/Ingress (23.03s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-127532 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-127532 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-127532 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [aa0334db-e03f-49db-b4a5-2a69ab0feb5d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [aa0334db-e03f-49db-b4a5-2a69ab0feb5d] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.025849734s
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-127532 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-127532 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-127532 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.191
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-127532 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-127532 addons disable ingress-dns --alsologtostderr -v=1: (1.925074785s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-127532 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-127532 addons disable ingress --alsologtostderr -v=1: (7.912827114s)
--- PASS: TestAddons/parallel/Ingress (23.03s)

TestAddons/parallel/InspektorGadget (11.03s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:836: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hkzrd" [c03b4390-888d-49ae-b6db-91d516004837] Running
addons_test.go:836: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.01275889s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-127532
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-127532: (6.016260298s)
--- PASS: TestAddons/parallel/InspektorGadget (11.03s)

TestAddons/parallel/MetricsServer (6.27s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 21.823667ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-p846v" [87d58e97-7364-4431-9e84-80f4247aa856] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.023318019s
addons_test.go:413: (dbg) Run:  kubectl --context addons-127532 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-127532 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p addons-127532 addons disable metrics-server --alsologtostderr -v=1: (1.135654927s)
--- PASS: TestAddons/parallel/MetricsServer (6.27s)

TestAddons/parallel/HelmTiller (12.79s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:454: tiller-deploy stabilized in 22.828306ms
addons_test.go:456: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-wpjtp" [a40dafcd-da9a-46c6-931f-65e21917673c] Running
addons_test.go:456: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.033127865s
addons_test.go:471: (dbg) Run:  kubectl --context addons-127532 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:471: (dbg) Done: kubectl --context addons-127532 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.452180623s)
addons_test.go:488: (dbg) Run:  out/minikube-linux-amd64 -p addons-127532 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:488: (dbg) Done: out/minikube-linux-amd64 -p addons-127532 addons disable helm-tiller --alsologtostderr -v=1: (1.281571298s)
--- PASS: TestAddons/parallel/HelmTiller (12.79s)

TestAddons/parallel/CSI (53.83s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:559: csi-hostpath-driver pods stabilized in 12.704291ms
addons_test.go:562: (dbg) Run:  kubectl --context addons-127532 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-127532 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [64d37d8b-942f-400c-aee2-53661c4b930e] Pending
helpers_test.go:344: "task-pv-pod" [64d37d8b-942f-400c-aee2-53661c4b930e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [64d37d8b-942f-400c-aee2-53661c4b930e] Running
addons_test.go:577: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.047551494s
addons_test.go:582: (dbg) Run:  kubectl --context addons-127532 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-127532 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-127532 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-127532 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-127532 delete pod task-pv-pod
addons_test.go:592: (dbg) Done: kubectl --context addons-127532 delete pod task-pv-pod: (1.032404164s)
addons_test.go:598: (dbg) Run:  kubectl --context addons-127532 delete pvc hpvc
addons_test.go:604: (dbg) Run:  kubectl --context addons-127532 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:614: (dbg) Run:  kubectl --context addons-127532 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:619: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [125a54c2-3836-4604-8f60-80ef09f74ae7] Pending
helpers_test.go:344: "task-pv-pod-restore" [125a54c2-3836-4604-8f60-80ef09f74ae7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [125a54c2-3836-4604-8f60-80ef09f74ae7] Running
addons_test.go:619: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.023554135s
addons_test.go:624: (dbg) Run:  kubectl --context addons-127532 delete pod task-pv-pod-restore
addons_test.go:624: (dbg) Done: kubectl --context addons-127532 delete pod task-pv-pod-restore: (1.278836189s)
addons_test.go:628: (dbg) Run:  kubectl --context addons-127532 delete pvc hpvc-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-127532 delete volumesnapshot new-snapshot-demo
addons_test.go:636: (dbg) Run:  out/minikube-linux-amd64 -p addons-127532 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:636: (dbg) Done: out/minikube-linux-amd64 -p addons-127532 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.81331166s)
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-127532 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.83s)

TestAddons/parallel/CloudSpanner (5.92s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:855: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-pfl6g" [1b6b1557-289f-4ad0-af09-514681a85adf] Running
addons_test.go:855: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.016910421s
addons_test.go:858: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-127532
--- PASS: TestAddons/parallel/CloudSpanner (5.92s)

TestAddons/parallel/LocalPath (57.05s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:871: (dbg) Run:  kubectl --context addons-127532 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:877: (dbg) Run:  kubectl --context addons-127532 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:881: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-127532 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:884: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [16cd4cae-7260-4762-a7e0-3797abef56ec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [16cd4cae-7260-4762-a7e0-3797abef56ec] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [16cd4cae-7260-4762-a7e0-3797abef56ec] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:884: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.032626175s
addons_test.go:889: (dbg) Run:  kubectl --context addons-127532 get pvc test-pvc -o=json
addons_test.go:898: (dbg) Run:  out/minikube-linux-amd64 -p addons-127532 ssh "cat /opt/local-path-provisioner/pvc-dc50f883-7edc-4880-b5eb-ddfd177dce7a_default_test-pvc/file1"
addons_test.go:910: (dbg) Run:  kubectl --context addons-127532 delete pod test-local-path
addons_test.go:914: (dbg) Run:  kubectl --context addons-127532 delete pvc test-pvc
addons_test.go:918: (dbg) Run:  out/minikube-linux-amd64 -p addons-127532 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:918: (dbg) Done: out/minikube-linux-amd64 -p addons-127532 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.230867529s)
--- PASS: TestAddons/parallel/LocalPath (57.05s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:648: (dbg) Run:  kubectl --context addons-127532 create ns new-namespace
addons_test.go:662: (dbg) Run:  kubectl --context addons-127532 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (92.53s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-127532
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-127532: (1m32.269872197s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-127532
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-127532
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-127532
--- PASS: TestAddons/StoppedEnableDisable (92.53s)

TestCertOptions (101.58s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-481341 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-481341 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m38.9994679s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-481341 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-481341 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-481341 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-481341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-481341
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-481341: (2.04604071s)
--- PASS: TestCertOptions (101.58s)

TestCertExpiration (330.38s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-285915 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-285915 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m34.950414997s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-285915 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
E1005 20:51:28.352265  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 20:51:29.276461  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-285915 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (53.890896515s)
helpers_test.go:175: Cleaning up "cert-expiration-285915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-285915
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-285915: (1.538149635s)
--- PASS: TestCertExpiration (330.38s)
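TestCertExpiration drives minikube with `--cert-expiration=3m` and then `8760h`; the underlying expiry check can be sketched with openssl's `-checkend` on a throwaway cert (hypothetical paths, not minikube's own certs):

```shell
# Issue a cert valid for 1 day, then ask whether it will still be valid in
# one hour (-checkend takes seconds and exits nonzero once inside the window).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt -subj "/CN=demo" 2>/dev/null
openssl x509 -enddate -noout -in /tmp/demo.crt
openssl x509 -checkend 3600 -noout -in /tmp/demo.crt && echo "valid for at least another hour"
```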

                                                
                                    
TestForceSystemdFlag (110.95s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-973312 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E1005 20:46:29.276017  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-973312 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m49.489285945s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-973312 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-973312" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-973312
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-973312: (1.200553443s)
--- PASS: TestForceSystemdFlag (110.95s)
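Both force-systemd tests assert on containerd's cgroup driver by cat-ing `/etc/containerd/config.toml`; the setting involved is the runc `SystemdCgroup` option. A sketch of that check against a local copy of the relevant TOML fragment (an assumption about what the test greps for, and not a full containerd config):

```shell
# Write just the fragment of containerd config that selects the systemd
# cgroup driver, then check it the way the test's ssh "cat" output would be.
cat <<'EOF' > /tmp/containerd-fragment.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
grep -q 'SystemdCgroup = true' /tmp/containerd-fragment.toml && echo "systemd cgroup driver enabled"
```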

                                                
                                    
TestForceSystemdEnv (69.74s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-437343 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-437343 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m8.369490515s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-437343 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-437343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-437343
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-437343: (1.174391398s)
--- PASS: TestForceSystemdEnv (69.74s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.21s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
E1005 20:48:25.305788  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (3.21s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
TestErrorSpam/pause (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 pause
--- PASS: TestErrorSpam/pause (1.51s)

                                                
                                    
TestErrorSpam/unpause (1.65s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

                                                
                                    
TestErrorSpam/stop (2.21s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 stop: (2.075582339s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-363911 --log_dir /tmp/nospam-363911 stop
--- PASS: TestErrorSpam/stop (2.21s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17363-196818/.minikube/files/etc/test/nested/copy/204004/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (75.79s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604028 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E1005 20:11:29.276190  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:11:29.282186  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:11:29.292386  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:11:29.312735  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:11:29.353105  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:11:29.433518  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:11:29.593994  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:11:29.914599  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:11:30.555593  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:11:31.835922  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:11:34.397008  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:11:39.517917  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:11:49.759039  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-604028 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m15.787886313s)
--- PASS: TestFunctional/serial/StartWithProxy (75.79s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.72s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604028 --alsologtostderr -v=8
E1005 20:12:10.239943  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-604028 --alsologtostderr -v=8: (39.718604632s)
functional_test.go:659: soft start took 39.719357321s for "functional-604028" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.72s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-604028 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.82s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 cache add registry.k8s.io/pause:3.1: (1.213760524s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 cache add registry.k8s.io/pause:3.3: (1.325004109s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 cache add registry.k8s.io/pause:latest: (1.280722903s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.82s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-604028 /tmp/TestFunctionalserialCacheCmdcacheadd_local3814740188/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 cache add minikube-local-cache-test:functional-604028
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 cache add minikube-local-cache-test:functional-604028: (1.396750608s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 cache delete minikube-local-cache-test:functional-604028
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-604028
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.73s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604028 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (220.300689ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 cache reload: (1.409489909s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 kubectl -- --context functional-604028 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-604028 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.93s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604028 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1005 20:12:51.202040  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-604028 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.934130328s)
functional_test.go:757: restart took 38.934302411s for "functional-604028" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.93s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-604028 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 logs: (1.582771382s)
--- PASS: TestFunctional/serial/LogsCmd (1.58s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 logs --file /tmp/TestFunctionalserialLogsFileCmd837353310/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 logs --file /tmp/TestFunctionalserialLogsFileCmd837353310/001/logs.txt: (1.515064191s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                    
TestFunctional/serial/InvalidService (3.65s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-604028 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-604028
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-604028: exit status 115 (289.536199ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.38:30267 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-604028 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.65s)
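The SVC_UNREACHABLE exit above comes from a Service whose selector matches no running pod. A manifest of that shape, written and sanity-checked locally (hypothetical names; not necessarily the repo's testdata/invalidsvc.yaml):

```shell
# A NodePort Service selecting a label nothing carries, so it has no endpoints.
cat <<'EOF' > /tmp/invalid-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod   # no pod has this label, so the service stays unreachable
  ports:
  - port: 80
EOF
grep -q 'no-such-pod' /tmp/invalid-svc.yaml && echo "manifest written"
```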

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604028 config get cpus: exit status 14 (45.668125ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604028 config get cpus: exit status 14 (53.165744ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (18.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-604028 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-604028 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 209781: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.25s)
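The "unable to kill pid … process already finished" note above is benign: the cleanup helper tolerates a daemon that has already exited. A minimal sketch of that tolerate-a-dead-pid pattern (the `sleep` stands in for the dashboard daemon):

```shell
# Start a short-lived background process as a stand-in for the daemon.
sleep 1 &
pid=$!
wait "$pid"   # the process finishes (and is reaped) on its own
# Attempting to kill it now fails harmlessly, mirroring helpers_test.go:508.
kill "$pid" 2>/dev/null || echo "process already finished (pid $pid)"
```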

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604028 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-604028 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (146.721934ms)

                                                
                                                
-- stdout --
	* [functional-604028] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-196818/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-196818/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1005 20:13:25.845154  209472 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:13:25.845424  209472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:13:25.845434  209472 out.go:309] Setting ErrFile to fd 2...
	I1005 20:13:25.845440  209472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:13:25.845623  209472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-196818/.minikube/bin
	I1005 20:13:25.846158  209472 out.go:303] Setting JSON to false
	I1005 20:13:25.847085  209472 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":21358,"bootTime":1696515448,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:13:25.847151  209472 start.go:138] virtualization: kvm guest
	I1005 20:13:25.849255  209472 out.go:177] * [functional-604028] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:13:25.850713  209472 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:13:25.850771  209472 notify.go:220] Checking for updates...
	I1005 20:13:25.852154  209472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:13:25.853462  209472 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-196818/kubeconfig
	I1005 20:13:25.854834  209472 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-196818/.minikube
	I1005 20:13:25.856040  209472 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:13:25.857197  209472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:13:25.858923  209472 config.go:182] Loaded profile config "functional-604028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 20:13:25.859579  209472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:13:25.859647  209472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:13:25.874900  209472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39183
	I1005 20:13:25.875386  209472 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:13:25.875979  209472 main.go:141] libmachine: Using API Version  1
	I1005 20:13:25.876000  209472 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:13:25.876377  209472 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:13:25.876589  209472 main.go:141] libmachine: (functional-604028) Calling .DriverName
	I1005 20:13:25.876842  209472 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:13:25.877238  209472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:13:25.877295  209472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:13:25.895556  209472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I1005 20:13:25.896009  209472 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:13:25.896536  209472 main.go:141] libmachine: Using API Version  1
	I1005 20:13:25.896557  209472 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:13:25.896946  209472 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:13:25.897149  209472 main.go:141] libmachine: (functional-604028) Calling .DriverName
	I1005 20:13:25.933654  209472 out.go:177] * Using the kvm2 driver based on existing profile
	I1005 20:13:25.935065  209472 start.go:298] selected driver: kvm2
	I1005 20:13:25.935080  209472 start.go:902] validating driver "kvm2" against &{Name:functional-604028 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:functional-604028 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:13:25.935234  209472 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:13:25.937507  209472 out.go:177] 
	W1005 20:13:25.938901  209472 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1005 20:13:25.940101  209472 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604028 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.28s)
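The dry run above fails as expected because `--memory 250MB` is below the 1800MB floor reported in the `RSRC_INSUFFICIENT_REQ_MEMORY` message. A minimal sketch of that kind of validation (the function name and structure here are illustrative, not minikube's own code):

```go
package main

import (
	"errors"
	"fmt"
)

// minUsableMB mirrors the 1800MB floor reported in the log above.
const minUsableMB = 1800

var errInsufficientMemory = errors.New("RSRC_INSUFFICIENT_REQ_MEMORY")

// checkRequestedMemory rejects allocations below the usable minimum,
// the same class of validation that makes the --dry-run exit non-zero.
func checkRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("%w: requested memory allocation %dMiB is less than the usable minimum of %dMB",
			errInsufficientMemory, requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(checkRequestedMemory(250) != nil)  // 250MB is rejected
	fmt.Println(checkRequestedMemory(4000) == nil) // the profile's 4000MB passes
}
```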

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604028 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-604028 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (137.824499ms)

-- stdout --
	* [functional-604028] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-196818/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-196818/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I1005 20:13:25.702061  209411 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:13:25.702218  209411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:13:25.702256  209411 out.go:309] Setting ErrFile to fd 2...
	I1005 20:13:25.702269  209411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:13:25.702570  209411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-196818/.minikube/bin
	I1005 20:13:25.703117  209411 out.go:303] Setting JSON to false
	I1005 20:13:25.704122  209411 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":21358,"bootTime":1696515448,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:13:25.704204  209411 start.go:138] virtualization: kvm guest
	I1005 20:13:25.706696  209411 out.go:177] * [functional-604028] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1005 20:13:25.708151  209411 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:13:25.708222  209411 notify.go:220] Checking for updates...
	I1005 20:13:25.709607  209411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:13:25.711343  209411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-196818/kubeconfig
	I1005 20:13:25.712636  209411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-196818/.minikube
	I1005 20:13:25.713815  209411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:13:25.714986  209411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:13:25.716429  209411 config.go:182] Loaded profile config "functional-604028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 20:13:25.716836  209411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:13:25.716888  209411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:13:25.733959  209411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41795
	I1005 20:13:25.734434  209411 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:13:25.735032  209411 main.go:141] libmachine: Using API Version  1
	I1005 20:13:25.735054  209411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:13:25.735446  209411 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:13:25.735627  209411 main.go:141] libmachine: (functional-604028) Calling .DriverName
	I1005 20:13:25.735907  209411 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:13:25.736245  209411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:13:25.736273  209411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:13:25.750511  209411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33925
	I1005 20:13:25.750939  209411 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:13:25.751409  209411 main.go:141] libmachine: Using API Version  1
	I1005 20:13:25.751438  209411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:13:25.751808  209411 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:13:25.751994  209411 main.go:141] libmachine: (functional-604028) Calling .DriverName
	I1005 20:13:25.786654  209411 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1005 20:13:25.787928  209411 start.go:298] selected driver: kvm2
	I1005 20:13:25.787959  209411 start.go:902] validating driver "kvm2" against &{Name:functional-604028 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:functional-604028 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 20:13:25.788100  209411 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:13:25.790076  209411 out.go:177] 
	W1005 20:13:25.791200  209411 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1005 20:13:25.792404  209411 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.89s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)
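The `-f host:{{.Host}},kublet:{{.Kubelet}},...` flag above is a Go text/template rendered against a status object ("kublet:" is just literal label text in the format string; the field reference is `{{.Kubelet}}`). A sketch of that rendering, with a hypothetical stand-in for minikube's internal status type:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status is an illustrative stand-in for minikube's status struct;
// the -f flag renders such a struct through text/template.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// renderStatus applies a `minikube status -f`-style format string.
func renderStatus(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderStatus(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}",
		Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```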

TestFunctional/parallel/ServiceCmdConnect (12.51s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-604028 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-604028 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-qb7dx" [cade7e0d-1d72-4f78-a15f-ccb1873c4706] Pending
helpers_test.go:344: "hello-node-connect-55497b8b78-qb7dx" [cade7e0d-1d72-4f78-a15f-ccb1873c4706] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-qb7dx" [cade7e0d-1d72-4f78-a15f-ccb1873c4706] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.018060793s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.38:30450
functional_test.go:1674: http://192.168.39.38:30450: success! body:

Hostname: hello-node-connect-55497b8b78-qb7dx

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.38:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.38:30450
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.51s)
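The endpoint printed by `minikube service hello-node-connect --url` is the node's IP plus the NodePort Kubernetes assigned to the Service (here 30450). A minimal sketch of how such a URL is assembled; the helper name is illustrative, not minikube's own API:

```go
package main

import (
	"fmt"
	"net/url"
)

// nodePortURL builds the externally reachable endpoint for a NodePort
// Service: the node's IP combined with the Service's assigned node port.
func nodePortURL(nodeIP string, nodePort int) string {
	u := url.URL{
		Scheme: "http",
		Host:   fmt.Sprintf("%s:%d", nodeIP, nodePort),
	}
	return u.String()
}

func main() {
	// The values observed in the log above.
	fmt.Println(nodePortURL("192.168.39.38", 30450)) // http://192.168.39.38:30450
}
```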

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (45.26s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1b097671-5450-4edd-b80f-48c1828d5325] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.025314187s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-604028 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-604028 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-604028 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-604028 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-604028 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7b7f902d-457e-4f5d-8582-2b1737537a72] Pending
helpers_test.go:344: "sp-pod" [7b7f902d-457e-4f5d-8582-2b1737537a72] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7b7f902d-457e-4f5d-8582-2b1737537a72] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.027378519s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-604028 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-604028 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-604028 delete -f testdata/storage-provisioner/pod.yaml: (1.484146926s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-604028 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d948f02d-7eed-4bb9-8d6b-eb9fc807dc1e] Pending
helpers_test.go:344: "sp-pod" [d948f02d-7eed-4bb9-8d6b-eb9fc807dc1e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d948f02d-7eed-4bb9-8d6b-eb9fc807dc1e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.02044899s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-604028 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.26s)

TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (0.89s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh -n functional-604028 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 cp functional-604028:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1341944732/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh -n functional-604028 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.89s)

TestFunctional/parallel/MySQL (31.39s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-604028 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-nf8np" [034b96dc-750a-41a1-a61c-361b890d2a68] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-nf8np" [034b96dc-750a-41a1-a61c-361b890d2a68] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.025885162s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-604028 exec mysql-859648c796-nf8np -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-604028 exec mysql-859648c796-nf8np -- mysql -ppassword -e "show databases;": exit status 1 (219.496981ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-604028 exec mysql-859648c796-nf8np -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-604028 exec mysql-859648c796-nf8np -- mysql -ppassword -e "show databases;": exit status 1 (163.525195ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
E1005 20:14:13.122373  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-604028 exec mysql-859648c796-nf8np -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-604028 exec mysql-859648c796-nf8np -- mysql -ppassword -e "show databases;": exit status 1 (147.691767ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-604028 exec mysql-859648c796-nf8np -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.39s)

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/204004/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "sudo cat /etc/test/nested/copy/204004/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.43s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/204004.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "sudo cat /etc/ssl/certs/204004.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/204004.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "sudo cat /usr/share/ca-certificates/204004.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/2040042.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "sudo cat /etc/ssl/certs/2040042.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/2040042.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "sudo cat /usr/share/ca-certificates/2040042.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.43s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-604028 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604028 ssh "sudo systemctl is-active docker": exit status 1 (214.174159ms)

-- stdout --
	inactive
-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604028 ssh "sudo systemctl is-active crio": exit status 1 (213.484859ms)

-- stdout --
	inactive
-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.7s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-604028 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-604028
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-604028
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-604028 image ls --format short --alsologtostderr:
I1005 20:13:57.656025  211420 out.go:296] Setting OutFile to fd 1 ...
I1005 20:13:57.656168  211420 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:13:57.656178  211420 out.go:309] Setting ErrFile to fd 2...
I1005 20:13:57.656183  211420 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:13:57.656431  211420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-196818/.minikube/bin
I1005 20:13:57.657055  211420 config.go:182] Loaded profile config "functional-604028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 20:13:57.657159  211420 config.go:182] Loaded profile config "functional-604028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 20:13:57.657503  211420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1005 20:13:57.657558  211420 main.go:141] libmachine: Launching plugin server for driver kvm2
I1005 20:13:57.672373  211420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41393
I1005 20:13:57.672863  211420 main.go:141] libmachine: () Calling .GetVersion
I1005 20:13:57.673553  211420 main.go:141] libmachine: Using API Version  1
I1005 20:13:57.673580  211420 main.go:141] libmachine: () Calling .SetConfigRaw
I1005 20:13:57.673953  211420 main.go:141] libmachine: () Calling .GetMachineName
I1005 20:13:57.674187  211420 main.go:141] libmachine: (functional-604028) Calling .GetState
I1005 20:13:57.676041  211420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1005 20:13:57.676083  211420 main.go:141] libmachine: Launching plugin server for driver kvm2
I1005 20:13:57.691599  211420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44995
I1005 20:13:57.692073  211420 main.go:141] libmachine: () Calling .GetVersion
I1005 20:13:57.692523  211420 main.go:141] libmachine: Using API Version  1
I1005 20:13:57.692545  211420 main.go:141] libmachine: () Calling .SetConfigRaw
I1005 20:13:57.692970  211420 main.go:141] libmachine: () Calling .GetMachineName
I1005 20:13:57.693205  211420 main.go:141] libmachine: (functional-604028) Calling .DriverName
I1005 20:13:57.693453  211420 ssh_runner.go:195] Run: systemctl --version
I1005 20:13:57.693485  211420 main.go:141] libmachine: (functional-604028) Calling .GetSSHHostname
I1005 20:13:57.696648  211420 main.go:141] libmachine: (functional-604028) DBG | domain functional-604028 has defined MAC address 52:54:00:f5:a0:dd in network mk-functional-604028
I1005 20:13:57.697062  211420 main.go:141] libmachine: (functional-604028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a0:dd", ip: ""} in network mk-functional-604028: {Iface:virbr1 ExpiryTime:2023-10-05 21:10:51 +0000 UTC Type:0 Mac:52:54:00:f5:a0:dd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-604028 Clientid:01:52:54:00:f5:a0:dd}
I1005 20:13:57.697088  211420 main.go:141] libmachine: (functional-604028) DBG | domain functional-604028 has defined IP address 192.168.39.38 and MAC address 52:54:00:f5:a0:dd in network mk-functional-604028
I1005 20:13:57.697329  211420 main.go:141] libmachine: (functional-604028) Calling .GetSSHPort
I1005 20:13:57.697595  211420 main.go:141] libmachine: (functional-604028) Calling .GetSSHKeyPath
I1005 20:13:57.697769  211420 main.go:141] libmachine: (functional-604028) Calling .GetSSHUsername
I1005 20:13:57.697942  211420 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/functional-604028/id_rsa Username:docker}
I1005 20:13:57.793079  211420 ssh_runner.go:195] Run: sudo crictl images --output json
I1005 20:13:57.854118  211420 main.go:141] libmachine: Making call to close driver server
I1005 20:13:57.854133  211420 main.go:141] libmachine: (functional-604028) Calling .Close
I1005 20:13:57.854457  211420 main.go:141] libmachine: Successfully made call to close driver server
I1005 20:13:57.854510  211420 main.go:141] libmachine: Making call to close connection to plugin binary
I1005 20:13:57.854527  211420 main.go:141] libmachine: Making call to close driver server
I1005 20:13:57.854538  211420 main.go:141] libmachine: (functional-604028) Calling .Close
I1005 20:13:57.854799  211420 main.go:141] libmachine: Successfully made call to close driver server
I1005 20:13:57.854820  211420 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-604028 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.28.2            | sha256:55f13c | 33.4MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| docker.io/library/nginx                     | latest             | sha256:61395b | 70.5MB |
| gcr.io/google-containers/addon-resizer      | functional-604028  | sha256:ffd4cf | 10.8MB |
| docker.io/library/minikube-local-cache-test | functional-604028  | sha256:d81db8 | 1.01kB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-scheduler              | v1.28.2            | sha256:7a5d9d | 18.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| localhost/my-image                          | functional-604028  | sha256:656039 | 775kB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/kube-apiserver              | v1.28.2            | sha256:cdcab1 | 34.7MB |
| registry.k8s.io/kube-proxy                  | v1.28.2            | sha256:c120fe | 24.6MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-604028 image ls --format table --alsologtostderr:
I1005 20:14:02.251143  211578 out.go:296] Setting OutFile to fd 1 ...
I1005 20:14:02.251265  211578 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:14:02.251289  211578 out.go:309] Setting ErrFile to fd 2...
I1005 20:14:02.251297  211578 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:14:02.251501  211578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-196818/.minikube/bin
I1005 20:14:02.252072  211578 config.go:182] Loaded profile config "functional-604028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 20:14:02.252168  211578 config.go:182] Loaded profile config "functional-604028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 20:14:02.252540  211578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1005 20:14:02.252604  211578 main.go:141] libmachine: Launching plugin server for driver kvm2
I1005 20:14:02.267315  211578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
I1005 20:14:02.267856  211578 main.go:141] libmachine: () Calling .GetVersion
I1005 20:14:02.268421  211578 main.go:141] libmachine: Using API Version  1
I1005 20:14:02.268445  211578 main.go:141] libmachine: () Calling .SetConfigRaw
I1005 20:14:02.268807  211578 main.go:141] libmachine: () Calling .GetMachineName
I1005 20:14:02.268986  211578 main.go:141] libmachine: (functional-604028) Calling .GetState
I1005 20:14:02.271105  211578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1005 20:14:02.271147  211578 main.go:141] libmachine: Launching plugin server for driver kvm2
I1005 20:14:02.287894  211578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34869
I1005 20:14:02.288297  211578 main.go:141] libmachine: () Calling .GetVersion
I1005 20:14:02.288741  211578 main.go:141] libmachine: Using API Version  1
I1005 20:14:02.288766  211578 main.go:141] libmachine: () Calling .SetConfigRaw
I1005 20:14:02.289137  211578 main.go:141] libmachine: () Calling .GetMachineName
I1005 20:14:02.289351  211578 main.go:141] libmachine: (functional-604028) Calling .DriverName
I1005 20:14:02.289652  211578 ssh_runner.go:195] Run: systemctl --version
I1005 20:14:02.289729  211578 main.go:141] libmachine: (functional-604028) Calling .GetSSHHostname
I1005 20:14:02.292980  211578 main.go:141] libmachine: (functional-604028) DBG | domain functional-604028 has defined MAC address 52:54:00:f5:a0:dd in network mk-functional-604028
I1005 20:14:02.293407  211578 main.go:141] libmachine: (functional-604028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a0:dd", ip: ""} in network mk-functional-604028: {Iface:virbr1 ExpiryTime:2023-10-05 21:10:51 +0000 UTC Type:0 Mac:52:54:00:f5:a0:dd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-604028 Clientid:01:52:54:00:f5:a0:dd}
I1005 20:14:02.293440  211578 main.go:141] libmachine: (functional-604028) DBG | domain functional-604028 has defined IP address 192.168.39.38 and MAC address 52:54:00:f5:a0:dd in network mk-functional-604028
I1005 20:14:02.293560  211578 main.go:141] libmachine: (functional-604028) Calling .GetSSHPort
I1005 20:14:02.293725  211578 main.go:141] libmachine: (functional-604028) Calling .GetSSHKeyPath
I1005 20:14:02.293939  211578 main.go:141] libmachine: (functional-604028) Calling .GetSSHUsername
I1005 20:14:02.294105  211578 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/functional-604028/id_rsa Username:docker}
I1005 20:14:02.385065  211578 ssh_runner.go:195] Run: sudo crictl images --output json
I1005 20:14:02.459428  211578 main.go:141] libmachine: Making call to close driver server
I1005 20:14:02.459457  211578 main.go:141] libmachine: (functional-604028) Calling .Close
I1005 20:14:02.459745  211578 main.go:141] libmachine: Successfully made call to close driver server
I1005 20:14:02.459776  211578 main.go:141] libmachine: Making call to close connection to plugin binary
I1005 20:14:02.459786  211578 main.go:141] libmachine: Making call to close driver server
I1005 20:14:02.459785  211578 main.go:141] libmachine: (functional-604028) DBG | Closing plugin on server side
I1005 20:14:02.459795  211578 main.go:141] libmachine: (functional-604028) Calling .Close
I1005 20:14:02.460061  211578 main.go:141] libmachine: Successfully made call to close driver server
I1005 20:14:02.460085  211578 main.go:141] libmachine: Making call to close connection to plugin binary
I1005 20:14:02.460087  211578 main.go:141] libmachine: (functional-604028) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-604028 image ls --format json --alsologtostderr:
[{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},
{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-604028"],"size":"10823156"},
{"id":"sha256:65603941520cba0f8efba3dceaab58d375851061866272e1d0ba652f62f45d8a","repoDigests":[],"repoTags":["localhost/my-image:functional-604028"],"size":"774904"},
{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},
{"id":"sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":["registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"24558871"},
{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},
{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},
{"id":"sha256:61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":["docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755"],"repoTags":["docker.io/library/nginx:latest"],"size":"70481054"},
{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},
{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},
{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},
{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},
{"id":"sha256:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"18811134"},
{"id":"sha256:d81db85f6f1358de02f39f97faea581ba8c66d360de56a6ee2ee84823bc94161","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-604028"],"size":"1006"},
{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},
{"id":"sha256:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"34662976"},
{"id":"sha256:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"33395782"},
{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},
{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},
{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-604028 image ls --format json --alsologtostderr:
I1005 20:14:02.005991  211555 out.go:296] Setting OutFile to fd 1 ...
I1005 20:14:02.006275  211555 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:14:02.006285  211555 out.go:309] Setting ErrFile to fd 2...
I1005 20:14:02.006290  211555 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:14:02.006515  211555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-196818/.minikube/bin
I1005 20:14:02.007067  211555 config.go:182] Loaded profile config "functional-604028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 20:14:02.007165  211555 config.go:182] Loaded profile config "functional-604028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 20:14:02.007557  211555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1005 20:14:02.007605  211555 main.go:141] libmachine: Launching plugin server for driver kvm2
I1005 20:14:02.022744  211555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34687
I1005 20:14:02.023230  211555 main.go:141] libmachine: () Calling .GetVersion
I1005 20:14:02.023945  211555 main.go:141] libmachine: Using API Version  1
I1005 20:14:02.023984  211555 main.go:141] libmachine: () Calling .SetConfigRaw
I1005 20:14:02.024364  211555 main.go:141] libmachine: () Calling .GetMachineName
I1005 20:14:02.024577  211555 main.go:141] libmachine: (functional-604028) Calling .GetState
I1005 20:14:02.026581  211555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1005 20:14:02.026634  211555 main.go:141] libmachine: Launching plugin server for driver kvm2
I1005 20:14:02.041632  211555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
I1005 20:14:02.042117  211555 main.go:141] libmachine: () Calling .GetVersion
I1005 20:14:02.042713  211555 main.go:141] libmachine: Using API Version  1
I1005 20:14:02.042740  211555 main.go:141] libmachine: () Calling .SetConfigRaw
I1005 20:14:02.043075  211555 main.go:141] libmachine: () Calling .GetMachineName
I1005 20:14:02.043297  211555 main.go:141] libmachine: (functional-604028) Calling .DriverName
I1005 20:14:02.043533  211555 ssh_runner.go:195] Run: systemctl --version
I1005 20:14:02.043559  211555 main.go:141] libmachine: (functional-604028) Calling .GetSSHHostname
I1005 20:14:02.046355  211555 main.go:141] libmachine: (functional-604028) DBG | domain functional-604028 has defined MAC address 52:54:00:f5:a0:dd in network mk-functional-604028
I1005 20:14:02.046780  211555 main.go:141] libmachine: (functional-604028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a0:dd", ip: ""} in network mk-functional-604028: {Iface:virbr1 ExpiryTime:2023-10-05 21:10:51 +0000 UTC Type:0 Mac:52:54:00:f5:a0:dd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-604028 Clientid:01:52:54:00:f5:a0:dd}
I1005 20:14:02.046816  211555 main.go:141] libmachine: (functional-604028) DBG | domain functional-604028 has defined IP address 192.168.39.38 and MAC address 52:54:00:f5:a0:dd in network mk-functional-604028
I1005 20:14:02.046949  211555 main.go:141] libmachine: (functional-604028) Calling .GetSSHPort
I1005 20:14:02.047155  211555 main.go:141] libmachine: (functional-604028) Calling .GetSSHKeyPath
I1005 20:14:02.047316  211555 main.go:141] libmachine: (functional-604028) Calling .GetSSHUsername
I1005 20:14:02.047477  211555 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/functional-604028/id_rsa Username:docker}
I1005 20:14:02.145356  211555 ssh_runner.go:195] Run: sudo crictl images --output json
I1005 20:14:02.198909  211555 main.go:141] libmachine: Making call to close driver server
I1005 20:14:02.198923  211555 main.go:141] libmachine: (functional-604028) Calling .Close
I1005 20:14:02.199221  211555 main.go:141] libmachine: Successfully made call to close driver server
I1005 20:14:02.199242  211555 main.go:141] libmachine: Making call to close connection to plugin binary
I1005 20:14:02.199262  211555 main.go:141] libmachine: Making call to close driver server
I1005 20:14:02.199273  211555 main.go:141] libmachine: (functional-604028) Calling .Close
I1005 20:14:02.199533  211555 main.go:141] libmachine: Successfully made call to close driver server
I1005 20:14:02.199559  211555 main.go:141] libmachine: Making call to close connection to plugin binary
I1005 20:14:02.199564  211555 main.go:141] libmachine: (functional-604028) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-604028 image ls --format yaml --alsologtostderr:
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "24558871"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "34662976"
- id: sha256:d81db85f6f1358de02f39f97faea581ba8c66d360de56a6ee2ee84823bc94161
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-604028
size: "1006"
- id: sha256:61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests:
- docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755
repoTags:
- docker.io/library/nginx:latest
size: "70481054"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-604028
size: "10823156"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "33395782"
- id: sha256:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "18811134"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-604028 image ls --format yaml --alsologtostderr:
I1005 20:13:57.900804  211443 out.go:296] Setting OutFile to fd 1 ...
I1005 20:13:57.901095  211443 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:13:57.901104  211443 out.go:309] Setting ErrFile to fd 2...
I1005 20:13:57.901109  211443 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:13:57.901305  211443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-196818/.minikube/bin
I1005 20:13:57.901888  211443 config.go:182] Loaded profile config "functional-604028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 20:13:57.901993  211443 config.go:182] Loaded profile config "functional-604028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 20:13:57.902420  211443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1005 20:13:57.902483  211443 main.go:141] libmachine: Launching plugin server for driver kvm2
I1005 20:13:57.917523  211443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46255
I1005 20:13:57.917995  211443 main.go:141] libmachine: () Calling .GetVersion
I1005 20:13:57.918623  211443 main.go:141] libmachine: Using API Version  1
I1005 20:13:57.918652  211443 main.go:141] libmachine: () Calling .SetConfigRaw
I1005 20:13:57.918999  211443 main.go:141] libmachine: () Calling .GetMachineName
I1005 20:13:57.919212  211443 main.go:141] libmachine: (functional-604028) Calling .GetState
I1005 20:13:57.921194  211443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1005 20:13:57.921242  211443 main.go:141] libmachine: Launching plugin server for driver kvm2
I1005 20:13:57.935622  211443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41745
I1005 20:13:57.936113  211443 main.go:141] libmachine: () Calling .GetVersion
I1005 20:13:57.936569  211443 main.go:141] libmachine: Using API Version  1
I1005 20:13:57.936593  211443 main.go:141] libmachine: () Calling .SetConfigRaw
I1005 20:13:57.936913  211443 main.go:141] libmachine: () Calling .GetMachineName
I1005 20:13:57.937102  211443 main.go:141] libmachine: (functional-604028) Calling .DriverName
I1005 20:13:57.937339  211443 ssh_runner.go:195] Run: systemctl --version
I1005 20:13:57.937364  211443 main.go:141] libmachine: (functional-604028) Calling .GetSSHHostname
I1005 20:13:57.939992  211443 main.go:141] libmachine: (functional-604028) DBG | domain functional-604028 has defined MAC address 52:54:00:f5:a0:dd in network mk-functional-604028
I1005 20:13:57.940420  211443 main.go:141] libmachine: (functional-604028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a0:dd", ip: ""} in network mk-functional-604028: {Iface:virbr1 ExpiryTime:2023-10-05 21:10:51 +0000 UTC Type:0 Mac:52:54:00:f5:a0:dd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-604028 Clientid:01:52:54:00:f5:a0:dd}
I1005 20:13:57.940449  211443 main.go:141] libmachine: (functional-604028) DBG | domain functional-604028 has defined IP address 192.168.39.38 and MAC address 52:54:00:f5:a0:dd in network mk-functional-604028
I1005 20:13:57.940607  211443 main.go:141] libmachine: (functional-604028) Calling .GetSSHPort
I1005 20:13:57.940827  211443 main.go:141] libmachine: (functional-604028) Calling .GetSSHKeyPath
I1005 20:13:57.940977  211443 main.go:141] libmachine: (functional-604028) Calling .GetSSHUsername
I1005 20:13:57.941146  211443 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/functional-604028/id_rsa Username:docker}
I1005 20:13:58.052114  211443 ssh_runner.go:195] Run: sudo crictl images --output json
I1005 20:13:58.131117  211443 main.go:141] libmachine: Making call to close driver server
I1005 20:13:58.131137  211443 main.go:141] libmachine: (functional-604028) Calling .Close
I1005 20:13:58.131532  211443 main.go:141] libmachine: (functional-604028) DBG | Closing plugin on server side
I1005 20:13:58.131631  211443 main.go:141] libmachine: Successfully made call to close driver server
I1005 20:13:58.131656  211443 main.go:141] libmachine: Making call to close connection to plugin binary
I1005 20:13:58.131674  211443 main.go:141] libmachine: Making call to close driver server
I1005 20:13:58.131684  211443 main.go:141] libmachine: (functional-604028) Calling .Close
I1005 20:13:58.131970  211443 main.go:141] libmachine: (functional-604028) DBG | Closing plugin on server side
I1005 20:13:58.132006  211443 main.go:141] libmachine: Successfully made call to close driver server
I1005 20:13:58.132023  211443 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
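The `image ls --format yaml` listing above is plain YAML with quoted `size` fields. A quick sanity check is to total those sizes; a minimal sketch using only the standard library (the two embedded entries are copied from the listing above, not a full dump):

```python
# Sum the "size" fields from a minikube `image ls --format yaml` listing.
# The snippet embeds two entries copied from the test output above.
import re

listing = '''
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "24558871"
'''

# size values are quoted decimal strings, so extract and convert them
sizes = [int(s) for s in re.findall(r'size: "(\d+)"', listing)]
total = sum(sizes)
print(len(sizes), total)  # 2 33617807
```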

TestFunctional/parallel/ImageCommands/ImageBuild (3.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604028 ssh pgrep buildkitd: exit status 1 (202.950057ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image build -t localhost/my-image:functional-604028 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 image build -t localhost/my-image:functional-604028 testdata/build --alsologtostderr: (3.362337349s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-604028 image build -t localhost/my-image:functional-604028 testdata/build --alsologtostderr:
I1005 20:13:58.381280  211497 out.go:296] Setting OutFile to fd 1 ...
I1005 20:13:58.381575  211497 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:13:58.381586  211497 out.go:309] Setting ErrFile to fd 2...
I1005 20:13:58.381590  211497 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 20:13:58.381790  211497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-196818/.minikube/bin
I1005 20:13:58.382402  211497 config.go:182] Loaded profile config "functional-604028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 20:13:58.382944  211497 config.go:182] Loaded profile config "functional-604028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1005 20:13:58.383313  211497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1005 20:13:58.383369  211497 main.go:141] libmachine: Launching plugin server for driver kvm2
I1005 20:13:58.398831  211497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41359
I1005 20:13:58.399338  211497 main.go:141] libmachine: () Calling .GetVersion
I1005 20:13:58.400042  211497 main.go:141] libmachine: Using API Version  1
I1005 20:13:58.400079  211497 main.go:141] libmachine: () Calling .SetConfigRaw
I1005 20:13:58.400552  211497 main.go:141] libmachine: () Calling .GetMachineName
I1005 20:13:58.400777  211497 main.go:141] libmachine: (functional-604028) Calling .GetState
I1005 20:13:58.403141  211497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1005 20:13:58.403184  211497 main.go:141] libmachine: Launching plugin server for driver kvm2
I1005 20:13:58.418500  211497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42661
I1005 20:13:58.418925  211497 main.go:141] libmachine: () Calling .GetVersion
I1005 20:13:58.419414  211497 main.go:141] libmachine: Using API Version  1
I1005 20:13:58.419437  211497 main.go:141] libmachine: () Calling .SetConfigRaw
I1005 20:13:58.419737  211497 main.go:141] libmachine: () Calling .GetMachineName
I1005 20:13:58.419932  211497 main.go:141] libmachine: (functional-604028) Calling .DriverName
I1005 20:13:58.420187  211497 ssh_runner.go:195] Run: systemctl --version
I1005 20:13:58.420230  211497 main.go:141] libmachine: (functional-604028) Calling .GetSSHHostname
I1005 20:13:58.423287  211497 main.go:141] libmachine: (functional-604028) DBG | domain functional-604028 has defined MAC address 52:54:00:f5:a0:dd in network mk-functional-604028
I1005 20:13:58.423738  211497 main.go:141] libmachine: (functional-604028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:a0:dd", ip: ""} in network mk-functional-604028: {Iface:virbr1 ExpiryTime:2023-10-05 21:10:51 +0000 UTC Type:0 Mac:52:54:00:f5:a0:dd Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-604028 Clientid:01:52:54:00:f5:a0:dd}
I1005 20:13:58.423812  211497 main.go:141] libmachine: (functional-604028) DBG | domain functional-604028 has defined IP address 192.168.39.38 and MAC address 52:54:00:f5:a0:dd in network mk-functional-604028
I1005 20:13:58.423951  211497 main.go:141] libmachine: (functional-604028) Calling .GetSSHPort
I1005 20:13:58.424185  211497 main.go:141] libmachine: (functional-604028) Calling .GetSSHKeyPath
I1005 20:13:58.424368  211497 main.go:141] libmachine: (functional-604028) Calling .GetSSHUsername
I1005 20:13:58.424522  211497 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/functional-604028/id_rsa Username:docker}
I1005 20:13:58.519634  211497 build_images.go:151] Building image from path: /tmp/build.1546271691.tar
I1005 20:13:58.519737  211497 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1005 20:13:58.536253  211497 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1546271691.tar
I1005 20:13:58.541432  211497 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1546271691.tar: stat -c "%s %y" /var/lib/minikube/build/build.1546271691.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1546271691.tar': No such file or directory
I1005 20:13:58.541508  211497 ssh_runner.go:362] scp /tmp/build.1546271691.tar --> /var/lib/minikube/build/build.1546271691.tar (3072 bytes)
I1005 20:13:58.576071  211497 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1546271691
I1005 20:13:58.588111  211497 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1546271691 -xf /var/lib/minikube/build/build.1546271691.tar
I1005 20:13:58.598401  211497 containerd.go:378] Building image: /var/lib/minikube/build/build.1546271691
I1005 20:13:58.598510  211497 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1546271691 --local dockerfile=/var/lib/minikube/build/build.1546271691 --output type=image,name=localhost/my-image:functional-604028
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 29B
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.3s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 1.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:98c49f674aa475b64a965a70e3b42d0472b39e68a427516eca2225b97dd23272 0.0s done
#8 exporting config sha256:65603941520cba0f8efba3dceaab58d375851061866272e1d0ba652f62f45d8a 0.0s done
#8 naming to localhost/my-image:functional-604028 done
#8 DONE 0.2s
I1005 20:14:01.673201  211497 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1546271691 --local dockerfile=/var/lib/minikube/build/build.1546271691 --output type=image,name=localhost/my-image:functional-604028: (3.074630278s)
I1005 20:14:01.673292  211497 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1546271691
I1005 20:14:01.685623  211497 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1546271691.tar
I1005 20:14:01.697659  211497 build_images.go:207] Built localhost/my-image:functional-604028 from /tmp/build.1546271691.tar
I1005 20:14:01.697710  211497 build_images.go:123] succeeded building to: functional-604028
I1005 20:14:01.697717  211497 build_images.go:124] failed building to: 
I1005 20:14:01.697748  211497 main.go:141] libmachine: Making call to close driver server
I1005 20:14:01.697766  211497 main.go:141] libmachine: (functional-604028) Calling .Close
I1005 20:14:01.698095  211497 main.go:141] libmachine: Successfully made call to close driver server
I1005 20:14:01.698124  211497 main.go:141] libmachine: Making call to close connection to plugin binary
I1005 20:14:01.698136  211497 main.go:141] libmachine: Making call to close driver server
I1005 20:14:01.698146  211497 main.go:141] libmachine: (functional-604028) Calling .Close
I1005 20:14:01.698499  211497 main.go:141] libmachine: Successfully made call to close driver server
I1005 20:14:01.698527  211497 main.go:141] libmachine: Making call to close connection to plugin binary
I1005 20:14:01.698555  211497 main.go:141] libmachine: (functional-604028) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.82s)
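The BuildKit progress lines above carry per-step timings; summing the `DONE` durations shows where the 3.07s build went. A minimal sketch over lines copied from the log (step #5 legitimately appears twice, once for the resolve stage and once for layer extraction):

```python
# Total the per-step "DONE Xs" durations from the BuildKit log above.
import re

log = """\
#1 DONE 0.1s
#2 DONE 0.4s
#3 DONE 0.0s
#4 DONE 0.1s
#5 DONE 0.3s
#5 DONE 0.4s
#6 DONE 1.3s
#7 DONE 0.1s
#8 DONE 0.2s
"""

durations = [float(t) for t in re.findall(r"DONE (\d+\.\d+)s", log)]
slowest = max(durations)
print(round(sum(durations), 1), slowest)  # the RUN step (#6) dominates at 1.3s
```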

TestFunctional/parallel/ImageCommands/Setup (4.62s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.596451477s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-604028
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.62s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image load --daemon gcr.io/google-containers/addon-resizer:functional-604028 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 image load --daemon gcr.io/google-containers/addon-resizer:functional-604028 --alsologtostderr: (4.830790747s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.09s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image load --daemon gcr.io/google-containers/addon-resizer:functional-604028 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 image load --daemon gcr.io/google-containers/addon-resizer:functional-604028 --alsologtostderr: (5.475303496s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.73s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/MountCmd/any-port (11.06s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-604028 /tmp/TestFunctionalparallelMountCmdany-port831188717/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696536820130404818" to /tmp/TestFunctionalparallelMountCmdany-port831188717/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696536820130404818" to /tmp/TestFunctionalparallelMountCmdany-port831188717/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696536820130404818" to /tmp/TestFunctionalparallelMountCmdany-port831188717/001/test-1696536820130404818
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604028 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (235.601601ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  5 20:13 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  5 20:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  5 20:13 test-1696536820130404818
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh cat /mount-9p/test-1696536820130404818
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-604028 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cd517707-3dff-4bed-9b96-7c88c428f10c] Pending
helpers_test.go:344: "busybox-mount" [cd517707-3dff-4bed-9b96-7c88c428f10c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [cd517707-3dff-4bed-9b96-7c88c428f10c] Running
helpers_test.go:344: "busybox-mount" [cd517707-3dff-4bed-9b96-7c88c428f10c] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [cd517707-3dff-4bed-9b96-7c88c428f10c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.013983176s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-604028 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604028 /tmp/TestFunctionalparallelMountCmdany-port831188717/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.06s)
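The first `findmnt` probe above exits non-zero because the 9p mount daemon is still coming up; the test simply re-runs the probe and it succeeds. That retry-until-success pattern can be sketched generically; the `retry` helper and stand-in `probe` below are illustrative only, not part of minikube's code:

```python
# Generic retry-until-success helper mirroring the findmnt re-probe above.
import time

def retry(probe, attempts=5, delay=0.01):
    """Call probe() until it returns truthy; return the attempt count used."""
    for i in range(attempts):
        if probe():
            return i + 1
        time.sleep(delay)
    raise TimeoutError("probe never succeeded")

# Stand-in probe: fails once (mount not up yet), then succeeds,
# like the two findmnt invocations in the log above.
calls = {"n": 0}
def probe():
    calls["n"] += 1
    return calls["n"] > 1

used = retry(probe)
print(used)  # 2 -- first attempt failed, second saw the mount
```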

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-604028
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image load --daemon gcr.io/google-containers/addon-resizer:functional-604028 --alsologtostderr
2023/10/05 20:13:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 image load --daemon gcr.io/google-containers/addon-resizer:functional-604028 --alsologtostderr: (4.946815336s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.22s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image save gcr.io/google-containers/addon-resizer:functional-604028 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 image save gcr.io/google-containers/addon-resizer:functional-604028 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.861327636s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.86s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image rm gcr.io/google-containers/addon-resizer:functional-604028 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

TestFunctional/parallel/MountCmd/specific-port (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-604028 /tmp/TestFunctionalparallelMountCmdspecific-port1416998628/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604028 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (246.154675ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604028 /tmp/TestFunctionalparallelMountCmdspecific-port1416998628/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604028 ssh "sudo umount -f /mount-9p": exit status 1 (217.332511ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-604028 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604028 /tmp/TestFunctionalparallelMountCmdspecific-port1416998628/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (2.29876814s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.57s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-604028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup35513352/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-604028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup35513352/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-604028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup35513352/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604028 ssh "findmnt -T" /mount2: exit status 1 (205.3377ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-604028 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup35513352/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup35513352/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup35513352/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-604028
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 image save --daemon gcr.io/google-containers/addon-resizer:functional-604028 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 image save --daemon gcr.io/google-containers/addon-resizer:functional-604028 --alsologtostderr: (1.875314568s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-604028
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.92s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-604028 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-604028 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-dfzgx" [541763bf-4761-49c9-b365-6250821eef56] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-dfzgx" [541763bf-4761-49c9-b365-6250821eef56] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.016454018s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "249.660803ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "42.496688ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "248.683941ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "41.83036ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

TestFunctional/parallel/ServiceCmd/List (1.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 service list: (1.246945742s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.25s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-604028 service list -o json: (1.39261215s)
functional_test.go:1493: Took "1.392701567s" to run "out/minikube-linux-amd64 -p functional-604028 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.39s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.38:32333
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-604028 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.38:32333
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-604028
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-604028
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-604028
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (83.16s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-544209 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-544209 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m23.158795875s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (83.16s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.02s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-544209 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-544209 addons enable ingress --alsologtostderr -v=5: (11.020010509s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.02s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-544209 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (38.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:205: (dbg) Run:  kubectl --context ingress-addon-legacy-544209 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:205: (dbg) Done: kubectl --context ingress-addon-legacy-544209 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.214383852s)
addons_test.go:230: (dbg) Run:  kubectl --context ingress-addon-legacy-544209 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context ingress-addon-legacy-544209 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [220b3936-873b-4c41-a0a0-51d91d5697d0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [220b3936-873b-4c41-a0a0-51d91d5697d0] Running
addons_test.go:248: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.023714539s
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-544209 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context ingress-addon-legacy-544209 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-544209 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.215
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-544209 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-544209 addons disable ingress-dns --alsologtostderr -v=1: (9.435016093s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-544209 addons disable ingress --alsologtostderr -v=1
E1005 20:16:29.275740  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-544209 addons disable ingress --alsologtostderr -v=1: (7.578834553s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (38.53s)

TestJSONOutput/start/Command (117.44s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-409041 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E1005 20:16:56.963504  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:18:25.306421  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 20:18:25.311715  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 20:18:25.322069  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 20:18:25.342387  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 20:18:25.382755  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 20:18:25.463119  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 20:18:25.623564  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 20:18:25.944342  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 20:18:26.585289  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 20:18:27.866429  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-409041 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m57.439785289s)
--- PASS: TestJSONOutput/start/Command (117.44s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-409041 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-409041 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.1s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-409041 --output=json --user=testUser
E1005 20:18:30.426969  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 20:18:35.547306  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-409041 --output=json --user=testUser: (7.09812588s)
--- PASS: TestJSONOutput/stop/Command (7.10s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-229347 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-229347 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.505645ms)
-- stdout --
	{"specversion":"1.0","id":"33aa3d0e-aabb-4e8b-8af7-e17a87ea35b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-229347] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fba58c9d-5b83-4b0d-856c-db74d6e0a5a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17363"}}
	{"specversion":"1.0","id":"ee262ea3-7c8f-4871-b8ce-f7e8c72aa693","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5beba649-8d4d-4fc4-9c80-983c5efdff93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17363-196818/kubeconfig"}}
	{"specversion":"1.0","id":"263a94cd-d07a-48d3-9cc2-3a1685d200db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-196818/.minikube"}}
	{"specversion":"1.0","id":"0562783e-0279-4d2a-9dd4-5d9014f56543","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2c5f5cef-6dbc-4c33-921c-0e7a2753a999","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4ffce36f-0300-417d-84dc-4e7c5433780b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-229347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-229347
--- PASS: TestErrorJSONOutput (0.19s)
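The stdout above is one CloudEvents-style JSON event per line. A minimal sketch of pulling fields out of such a line with POSIX `sed` only (when `jq` is available, `jq -r '.type'` is the sturdier choice). The sample line is abridged from the error event in the output; it is not the complete event.

```shell
# One abridged CloudEvents-style line, as emitted by `minikube ... --output=json`
line='{"specversion":"1.0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS"}}'
# Crude field extraction: capture the quoted value following each key
evtype=$(printf '%s' "$line" | sed -n 's/.*"type":"\([^"]*\)".*/\1/p')
evname=$(printf '%s' "$line" | sed -n 's/.*"name":"\([^"]*\)".*/\1/p')
echo "$evtype $evname"
```

The `type` field distinguishes `io.k8s.sigs.minikube.step`, `.info`, and `.error` events, which is how a consumer can tell progress lines from the final `DRV_UNSUPPORTED_OS` failure.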

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (135.13s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-445490 --driver=kvm2  --container-runtime=containerd
E1005 20:18:45.788560  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 20:19:06.269418  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-445490 --driver=kvm2  --container-runtime=containerd: (1m3.438195112s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-447661 --driver=kvm2  --container-runtime=containerd
E1005 20:19:47.230338  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-447661 --driver=kvm2  --container-runtime=containerd: (1m8.960247719s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-445490
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-447661
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-447661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-447661
E1005 20:20:52.162164  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 20:20:52.167474  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 20:20:52.177790  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 20:20:52.198051  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 20:20:52.238412  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 20:20:52.318761  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-447661: (1.025621861s)
helpers_test.go:175: Cleaning up "first-445490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-445490
E1005 20:20:52.479272  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 20:20:52.800281  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
--- PASS: TestMinikubeProfile (135.13s)

TestMountStart/serial/StartWithMountFirst (28.72s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-282146 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1005 20:20:53.440754  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 20:20:54.721271  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 20:20:57.282869  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 20:21:02.404082  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 20:21:09.150554  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 20:21:12.644427  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-282146 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.724097317s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.72s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-282146 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-282146 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (28.54s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-301056 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1005 20:21:29.276112  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:21:33.125534  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-301056 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.536795836s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.54s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-301056 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-301056 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-282146 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-301056 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-301056 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.16s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-301056
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-301056: (1.164508739s)
--- PASS: TestMountStart/serial/Stop (1.16s)

TestMountStart/serial/RestartStopped (22.35s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-301056
E1005 20:22:14.086073  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-301056: (21.345084015s)
--- PASS: TestMountStart/serial/RestartStopped (22.35s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-301056 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-301056 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (128.57s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-876260 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1005 20:23:25.305339  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 20:23:36.007226  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 20:23:52.990768  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-876260 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m8.153775501s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (128.57s)

TestMultiNode/serial/DeployApp2Nodes (4.12s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-876260 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-876260 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-876260 -- rollout status deployment/busybox: (2.395992299s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-876260 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-876260 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-876260 -- exec busybox-5bc68d56bd-bb66l -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-876260 -- exec busybox-5bc68d56bd-shpcx -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-876260 -- exec busybox-5bc68d56bd-bb66l -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-876260 -- exec busybox-5bc68d56bd-shpcx -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-876260 -- exec busybox-5bc68d56bd-bb66l -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-876260 -- exec busybox-5bc68d56bd-shpcx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.12s)

TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-876260 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-876260 -- exec busybox-5bc68d56bd-bb66l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-876260 -- exec busybox-5bc68d56bd-bb66l -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-876260 -- exec busybox-5bc68d56bd-shpcx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-876260 -- exec busybox-5bc68d56bd-shpcx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
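The test above resolves `host.minikube.internal` inside a busybox pod and slices the IP out with `nslookup ... | awk 'NR==5' | cut -d' ' -f3`. Replaying that slice on canned busybox-style `nslookup` output (an assumed sample, not captured from this run) shows why line 5, field 3 is the host IP:

```shell
# Canned busybox-style nslookup output: lines 1-2 are the DNS server,
# line 3 is blank, line 4 is the queried name, line 5 carries the answer.
out='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'
# Line 5 is "Address 1: <ip> <name>"; with a single-space delimiter, field 3 is the IP.
hostip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "$hostip"
```

The extracted address (`192.168.39.1` here, the libvirt gateway in this run) is then the target of the `ping -c 1` check from each pod.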

TestMultiNode/serial/AddNode (42.69s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-876260 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-876260 -v 3 --alsologtostderr: (42.047399097s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.69s)

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (7.88s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 cp testdata/cp-test.txt multinode-876260:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 cp multinode-876260:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1206106781/001/cp-test_multinode-876260.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 cp multinode-876260:/home/docker/cp-test.txt multinode-876260-m02:/home/docker/cp-test_multinode-876260_multinode-876260-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260-m02 "sudo cat /home/docker/cp-test_multinode-876260_multinode-876260-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 cp multinode-876260:/home/docker/cp-test.txt multinode-876260-m03:/home/docker/cp-test_multinode-876260_multinode-876260-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260-m03 "sudo cat /home/docker/cp-test_multinode-876260_multinode-876260-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 cp testdata/cp-test.txt multinode-876260-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 cp multinode-876260-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1206106781/001/cp-test_multinode-876260-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 cp multinode-876260-m02:/home/docker/cp-test.txt multinode-876260:/home/docker/cp-test_multinode-876260-m02_multinode-876260.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260 "sudo cat /home/docker/cp-test_multinode-876260-m02_multinode-876260.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 cp multinode-876260-m02:/home/docker/cp-test.txt multinode-876260-m03:/home/docker/cp-test_multinode-876260-m02_multinode-876260-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260-m03 "sudo cat /home/docker/cp-test_multinode-876260-m02_multinode-876260-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 cp testdata/cp-test.txt multinode-876260-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 cp multinode-876260-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1206106781/001/cp-test_multinode-876260-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 cp multinode-876260-m03:/home/docker/cp-test.txt multinode-876260:/home/docker/cp-test_multinode-876260-m03_multinode-876260.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260 "sudo cat /home/docker/cp-test_multinode-876260-m03_multinode-876260.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 cp multinode-876260-m03:/home/docker/cp-test.txt multinode-876260-m02:/home/docker/cp-test_multinode-876260-m03_multinode-876260-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 ssh -n multinode-876260-m02 "sudo cat /home/docker/cp-test_multinode-876260-m03_multinode-876260-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.88s)
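Each `cp` step above is verified the same way: copy the file, then `ssh -n <node> "sudo cat <dst>"` and compare the contents. A local stand-in sketch of that round-trip pattern, with plain `cp`/`cat` in place of `minikube cp` and the ssh read-back (paths are hypothetical):

```shell
# Copy-then-read-back verification pattern, replayed locally
workdir=$(mktemp -d)
printf 'cp-test payload\n' > "$workdir/cp-test.txt"
# stands in for: minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
cp "$workdir/cp-test.txt" "$workdir/cp-test_roundtrip.txt"
# stands in for: minikube -p <profile> ssh -n <node> "sudo cat /home/docker/cp-test.txt"
roundtrip=$(cat "$workdir/cp-test_roundtrip.txt")
echo "$roundtrip"
rm -rf "$workdir"
```

The test repeats this for every source/destination pair (host to node, node to host, node to node), which is why the section is dominated by alternating `cp` and `ssh ... sudo cat` runs.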

TestMultiNode/serial/StopNode (2.33s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-876260 node stop m03: (1.390469983s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-876260 status: exit status 7 (478.492587ms)

-- stdout --
	multinode-876260
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-876260-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-876260-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-876260 status --alsologtostderr: exit status 7 (461.415673ms)

-- stdout --
	multinode-876260
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-876260-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-876260-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1005 20:25:23.258682  218795 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:25:23.258804  218795 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:25:23.258808  218795 out.go:309] Setting ErrFile to fd 2...
	I1005 20:25:23.258813  218795 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:25:23.258988  218795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-196818/.minikube/bin
	I1005 20:25:23.259148  218795 out.go:303] Setting JSON to false
	I1005 20:25:23.259182  218795 mustload.go:65] Loading cluster: multinode-876260
	I1005 20:25:23.259257  218795 notify.go:220] Checking for updates...
	I1005 20:25:23.259619  218795 config.go:182] Loaded profile config "multinode-876260": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 20:25:23.259639  218795 status.go:255] checking status of multinode-876260 ...
	I1005 20:25:23.260038  218795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:25:23.260114  218795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:25:23.277709  218795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42179
	I1005 20:25:23.278308  218795 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:25:23.279006  218795 main.go:141] libmachine: Using API Version  1
	I1005 20:25:23.279032  218795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:25:23.279480  218795 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:25:23.279769  218795 main.go:141] libmachine: (multinode-876260) Calling .GetState
	I1005 20:25:23.281772  218795 status.go:330] multinode-876260 host status = "Running" (err=<nil>)
	I1005 20:25:23.281800  218795 host.go:66] Checking if "multinode-876260" exists ...
	I1005 20:25:23.282290  218795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:25:23.282341  218795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:25:23.298749  218795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I1005 20:25:23.299205  218795 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:25:23.299849  218795 main.go:141] libmachine: Using API Version  1
	I1005 20:25:23.299931  218795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:25:23.300298  218795 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:25:23.300509  218795 main.go:141] libmachine: (multinode-876260) Calling .GetIP
	I1005 20:25:23.304001  218795 main.go:141] libmachine: (multinode-876260) DBG | domain multinode-876260 has defined MAC address 52:54:00:5f:7b:84 in network mk-multinode-876260
	I1005 20:25:23.304460  218795 main.go:141] libmachine: (multinode-876260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:7b:84", ip: ""} in network mk-multinode-876260: {Iface:virbr1 ExpiryTime:2023-10-05 21:22:32 +0000 UTC Type:0 Mac:52:54:00:5f:7b:84 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-876260 Clientid:01:52:54:00:5f:7b:84}
	I1005 20:25:23.304513  218795 main.go:141] libmachine: (multinode-876260) DBG | domain multinode-876260 has defined IP address 192.168.39.196 and MAC address 52:54:00:5f:7b:84 in network mk-multinode-876260
	I1005 20:25:23.304762  218795 host.go:66] Checking if "multinode-876260" exists ...
	I1005 20:25:23.305090  218795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:25:23.305137  218795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:25:23.321403  218795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39789
	I1005 20:25:23.321910  218795 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:25:23.322567  218795 main.go:141] libmachine: Using API Version  1
	I1005 20:25:23.322598  218795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:25:23.323030  218795 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:25:23.323231  218795 main.go:141] libmachine: (multinode-876260) Calling .DriverName
	I1005 20:25:23.323494  218795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:25:23.323530  218795 main.go:141] libmachine: (multinode-876260) Calling .GetSSHHostname
	I1005 20:25:23.327416  218795 main.go:141] libmachine: (multinode-876260) DBG | domain multinode-876260 has defined MAC address 52:54:00:5f:7b:84 in network mk-multinode-876260
	I1005 20:25:23.328109  218795 main.go:141] libmachine: (multinode-876260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:7b:84", ip: ""} in network mk-multinode-876260: {Iface:virbr1 ExpiryTime:2023-10-05 21:22:32 +0000 UTC Type:0 Mac:52:54:00:5f:7b:84 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-876260 Clientid:01:52:54:00:5f:7b:84}
	I1005 20:25:23.328143  218795 main.go:141] libmachine: (multinode-876260) DBG | domain multinode-876260 has defined IP address 192.168.39.196 and MAC address 52:54:00:5f:7b:84 in network mk-multinode-876260
	I1005 20:25:23.328327  218795 main.go:141] libmachine: (multinode-876260) Calling .GetSSHPort
	I1005 20:25:23.328638  218795 main.go:141] libmachine: (multinode-876260) Calling .GetSSHKeyPath
	I1005 20:25:23.328862  218795 main.go:141] libmachine: (multinode-876260) Calling .GetSSHUsername
	I1005 20:25:23.329069  218795 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/multinode-876260/id_rsa Username:docker}
	I1005 20:25:23.422766  218795 ssh_runner.go:195] Run: systemctl --version
	I1005 20:25:23.430045  218795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:25:23.446666  218795 kubeconfig.go:92] found "multinode-876260" server: "https://192.168.39.196:8443"
	I1005 20:25:23.446707  218795 api_server.go:166] Checking apiserver status ...
	I1005 20:25:23.446777  218795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 20:25:23.461127  218795 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	I1005 20:25:23.472279  218795 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/podabd5807f24633d9bd8b186c70a9ff4fb/ec5032fc084633c43f3bc85e8486f249785127f14bee67c8b3855824a42a589e"
	I1005 20:25:23.472358  218795 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podabd5807f24633d9bd8b186c70a9ff4fb/ec5032fc084633c43f3bc85e8486f249785127f14bee67c8b3855824a42a589e/freezer.state
	I1005 20:25:23.483249  218795 api_server.go:204] freezer state: "THAWED"
	I1005 20:25:23.483282  218795 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I1005 20:25:23.489468  218795 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I1005 20:25:23.489500  218795 status.go:421] multinode-876260 apiserver status = Running (err=<nil>)
	I1005 20:25:23.489511  218795 status.go:257] multinode-876260 status: &{Name:multinode-876260 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1005 20:25:23.489529  218795 status.go:255] checking status of multinode-876260-m02 ...
	I1005 20:25:23.489903  218795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:25:23.489946  218795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:25:23.505812  218795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39215
	I1005 20:25:23.506355  218795 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:25:23.506963  218795 main.go:141] libmachine: Using API Version  1
	I1005 20:25:23.506991  218795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:25:23.507408  218795 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:25:23.507589  218795 main.go:141] libmachine: (multinode-876260-m02) Calling .GetState
	I1005 20:25:23.509278  218795 status.go:330] multinode-876260-m02 host status = "Running" (err=<nil>)
	I1005 20:25:23.509312  218795 host.go:66] Checking if "multinode-876260-m02" exists ...
	I1005 20:25:23.509625  218795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:25:23.509675  218795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:25:23.526841  218795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40959
	I1005 20:25:23.527413  218795 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:25:23.527988  218795 main.go:141] libmachine: Using API Version  1
	I1005 20:25:23.528020  218795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:25:23.528352  218795 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:25:23.528617  218795 main.go:141] libmachine: (multinode-876260-m02) Calling .GetIP
	I1005 20:25:23.531958  218795 main.go:141] libmachine: (multinode-876260-m02) DBG | domain multinode-876260-m02 has defined MAC address 52:54:00:6f:5b:f6 in network mk-multinode-876260
	I1005 20:25:23.532412  218795 main.go:141] libmachine: (multinode-876260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:5b:f6", ip: ""} in network mk-multinode-876260: {Iface:virbr1 ExpiryTime:2023-10-05 21:23:55 +0000 UTC Type:0 Mac:52:54:00:6f:5b:f6 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-876260-m02 Clientid:01:52:54:00:6f:5b:f6}
	I1005 20:25:23.532446  218795 main.go:141] libmachine: (multinode-876260-m02) DBG | domain multinode-876260-m02 has defined IP address 192.168.39.212 and MAC address 52:54:00:6f:5b:f6 in network mk-multinode-876260
	I1005 20:25:23.532667  218795 host.go:66] Checking if "multinode-876260-m02" exists ...
	I1005 20:25:23.533177  218795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:25:23.533241  218795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:25:23.549103  218795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41391
	I1005 20:25:23.549596  218795 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:25:23.550132  218795 main.go:141] libmachine: Using API Version  1
	I1005 20:25:23.550164  218795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:25:23.550566  218795 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:25:23.550758  218795 main.go:141] libmachine: (multinode-876260-m02) Calling .DriverName
	I1005 20:25:23.550957  218795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 20:25:23.550983  218795 main.go:141] libmachine: (multinode-876260-m02) Calling .GetSSHHostname
	I1005 20:25:23.554074  218795 main.go:141] libmachine: (multinode-876260-m02) DBG | domain multinode-876260-m02 has defined MAC address 52:54:00:6f:5b:f6 in network mk-multinode-876260
	I1005 20:25:23.554637  218795 main.go:141] libmachine: (multinode-876260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:5b:f6", ip: ""} in network mk-multinode-876260: {Iface:virbr1 ExpiryTime:2023-10-05 21:23:55 +0000 UTC Type:0 Mac:52:54:00:6f:5b:f6 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-876260-m02 Clientid:01:52:54:00:6f:5b:f6}
	I1005 20:25:23.554682  218795 main.go:141] libmachine: (multinode-876260-m02) DBG | domain multinode-876260-m02 has defined IP address 192.168.39.212 and MAC address 52:54:00:6f:5b:f6 in network mk-multinode-876260
	I1005 20:25:23.554843  218795 main.go:141] libmachine: (multinode-876260-m02) Calling .GetSSHPort
	I1005 20:25:23.555048  218795 main.go:141] libmachine: (multinode-876260-m02) Calling .GetSSHKeyPath
	I1005 20:25:23.555192  218795 main.go:141] libmachine: (multinode-876260-m02) Calling .GetSSHUsername
	I1005 20:25:23.555359  218795 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17363-196818/.minikube/machines/multinode-876260-m02/id_rsa Username:docker}
	I1005 20:25:23.642842  218795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 20:25:23.656428  218795 status.go:257] multinode-876260-m02 status: &{Name:multinode-876260-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1005 20:25:23.656467  218795 status.go:255] checking status of multinode-876260-m03 ...
	I1005 20:25:23.656835  218795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:25:23.656898  218795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:25:23.673473  218795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I1005 20:25:23.673996  218795 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:25:23.674549  218795 main.go:141] libmachine: Using API Version  1
	I1005 20:25:23.674577  218795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:25:23.675017  218795 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:25:23.675258  218795 main.go:141] libmachine: (multinode-876260-m03) Calling .GetState
	I1005 20:25:23.676989  218795 status.go:330] multinode-876260-m03 host status = "Stopped" (err=<nil>)
	I1005 20:25:23.677009  218795 status.go:343] host is not running, skipping remaining checks
	I1005 20:25:23.677014  218795 status.go:257] multinode-876260-m03 status: &{Name:multinode-876260-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)

TestMultiNode/serial/StartAfterStop (29.66s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 node start m03 --alsologtostderr
E1005 20:25:52.162085  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-876260 node start m03 --alsologtostderr: (28.921378173s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.66s)

TestMultiNode/serial/RestartKeepsNodes (316.82s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-876260
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-876260
E1005 20:26:19.848227  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 20:26:29.276121  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:27:52.323872  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:28:25.305865  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-876260: (3m4.434964982s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-876260 --wait=true -v=8 --alsologtostderr
E1005 20:30:52.162757  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-876260 --wait=true -v=8 --alsologtostderr: (2m12.289903914s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-876260
--- PASS: TestMultiNode/serial/RestartKeepsNodes (316.82s)

TestMultiNode/serial/DeleteNode (1.76s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-876260 node delete m03: (1.223520813s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.76s)

TestMultiNode/serial/StopMultiNode (183.45s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 stop
E1005 20:31:29.276395  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 20:33:25.305723  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-876260 stop: (3m3.272542456s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-876260 status: exit status 7 (89.931142ms)

-- stdout --
	multinode-876260
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-876260-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-876260 status --alsologtostderr: exit status 7 (89.600016ms)

-- stdout --
	multinode-876260
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-876260-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1005 20:34:15.332463  220993 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:34:15.332791  220993 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:34:15.332802  220993 out.go:309] Setting ErrFile to fd 2...
	I1005 20:34:15.332807  220993 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:34:15.333034  220993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-196818/.minikube/bin
	I1005 20:34:15.333232  220993 out.go:303] Setting JSON to false
	I1005 20:34:15.333274  220993 mustload.go:65] Loading cluster: multinode-876260
	I1005 20:34:15.333501  220993 notify.go:220] Checking for updates...
	I1005 20:34:15.333699  220993 config.go:182] Loaded profile config "multinode-876260": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 20:34:15.333721  220993 status.go:255] checking status of multinode-876260 ...
	I1005 20:34:15.334096  220993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:34:15.334183  220993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:34:15.349766  220993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43697
	I1005 20:34:15.350344  220993 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:34:15.351018  220993 main.go:141] libmachine: Using API Version  1
	I1005 20:34:15.351052  220993 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:34:15.351529  220993 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:34:15.351758  220993 main.go:141] libmachine: (multinode-876260) Calling .GetState
	I1005 20:34:15.355129  220993 status.go:330] multinode-876260 host status = "Stopped" (err=<nil>)
	I1005 20:34:15.355149  220993 status.go:343] host is not running, skipping remaining checks
	I1005 20:34:15.355156  220993 status.go:257] multinode-876260 status: &{Name:multinode-876260 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1005 20:34:15.355181  220993 status.go:255] checking status of multinode-876260-m02 ...
	I1005 20:34:15.355546  220993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1005 20:34:15.355608  220993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1005 20:34:15.372085  220993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36125
	I1005 20:34:15.372800  220993 main.go:141] libmachine: () Calling .GetVersion
	I1005 20:34:15.373393  220993 main.go:141] libmachine: Using API Version  1
	I1005 20:34:15.373438  220993 main.go:141] libmachine: () Calling .SetConfigRaw
	I1005 20:34:15.373853  220993 main.go:141] libmachine: () Calling .GetMachineName
	I1005 20:34:15.374124  220993 main.go:141] libmachine: (multinode-876260-m02) Calling .GetState
	I1005 20:34:15.376429  220993 status.go:330] multinode-876260-m02 host status = "Stopped" (err=<nil>)
	I1005 20:34:15.376450  220993 status.go:343] host is not running, skipping remaining checks
	I1005 20:34:15.376456  220993 status.go:257] multinode-876260-m02 status: &{Name:multinode-876260-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.45s)

TestMultiNode/serial/RestartMultiNode (93.82s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-876260 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1005 20:34:48.351232  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-876260 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m33.265167045s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-876260 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (93.82s)

TestMultiNode/serial/ValidateNameConflict (68.75s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-876260
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-876260-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-876260-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (64.35647ms)

-- stdout --
	* [multinode-876260-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-196818/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-196818/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-876260-m02' is duplicated with machine name 'multinode-876260-m02' in profile 'multinode-876260'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-876260-m03 --driver=kvm2  --container-runtime=containerd
E1005 20:35:52.164051  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 20:36:29.275988  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-876260-m03 --driver=kvm2  --container-runtime=containerd: (1m7.363769121s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-876260
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-876260: exit status 80 (224.559946ms)

-- stdout --
	* Adding node m03 to cluster multinode-876260
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-876260-m03 already exists in multinode-876260-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-876260-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-876260-m03: (1.058421511s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (68.75s)

TestPreload (332.61s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-135557 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E1005 20:37:15.209613  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 20:38:25.306115  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-135557 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m26.300444909s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-135557 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-135557
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-135557: (1m31.799847645s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-135557 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E1005 20:40:52.162759  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 20:41:29.276002  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-135557 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (2m32.62509012s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-135557 image list
helpers_test.go:175: Cleaning up "test-preload-135557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-135557
--- PASS: TestPreload (332.61s)

TestScheduledStopUnix (134.44s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-188576 --memory=2048 --driver=kvm2  --container-runtime=containerd
E1005 20:43:25.306090  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-188576 --memory=2048 --driver=kvm2  --container-runtime=containerd: (1m2.769468296s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-188576 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-188576 -n scheduled-stop-188576
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-188576 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-188576 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-188576 -n scheduled-stop-188576
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-188576
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-188576 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1005 20:44:32.324622  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-188576
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-188576: exit status 7 (58.118296ms)

-- stdout --
	scheduled-stop-188576
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-188576 -n scheduled-stop-188576
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-188576 -n scheduled-stop-188576: exit status 7 (61.760427ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-188576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-188576
--- PASS: TestScheduledStopUnix (134.44s)

TestRunningBinaryUpgrade (216.24s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.1446829788.exe start -p running-upgrade-452420 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E1005 20:45:52.162453  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.1446829788.exe start -p running-upgrade-452420 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m18.691764003s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-452420 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-452420 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m15.73693016s)
helpers_test.go:175: Cleaning up "running-upgrade-452420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-452420
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-452420: (1.288649804s)
--- PASS: TestRunningBinaryUpgrade (216.24s)

TestKubernetesUpgrade (209.93s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-102411 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-102411 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m37.329895316s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-102411
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-102411: (2.11646222s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-102411 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-102411 status --format={{.Host}}: exit status 7 (66.27763ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
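The status check above treats a non-zero exit as informational rather than fatal: `minikube status` exits non-zero when the host is stopped, so the caller must capture the code instead of aborting on it. A minimal sketch of that pattern, using a stub function `fake_minikube_status` in place of the real binary (an assumption, since the real command needs a cluster):

```shell
# Stub standing in for `minikube -p <profile> status --format={{.Host}}`
# on a stopped host: prints the state and exits 7, as seen in the log above.
fake_minikube_status() {
  echo "Stopped"
  return 7
}

# Capture both the output and the exit code without aborting the script.
out=$(fake_minikube_status)
code=$?
echo "status=$out exit=$code"
if [ "$code" -ne 0 ]; then
  echo "status error: exit status $code (may be ok)"
fi
```

Run standalone, this prints `status=Stopped exit=7` followed by the "may be ok" note, mirroring how the test proceeds past exit status 7.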
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-102411 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E1005 20:50:52.162490  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-102411 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m3.434640081s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-102411 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-102411 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-102411 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (109.231602ms)

-- stdout --
	* [kubernetes-upgrade-102411] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-196818/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-196818/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-102411
	    minikube start -p kubernetes-upgrade-102411 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1024112 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-102411 --kubernetes-version=v1.28.2
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-102411 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-102411 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (45.475444783s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-102411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-102411
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-102411: (1.307822815s)
--- PASS: TestKubernetesUpgrade (209.93s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-408670 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-408670 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (78.932585ms)

-- stdout --
	* [NoKubernetes-408670] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-196818/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-196818/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
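The pass above confirms that minikube rejects `--kubernetes-version` together with `--no-kubernetes` (MK_USAGE, exit status 14). A hedged sketch of that mutual-exclusion check; `validate_flags` is an illustrative helper, not minikube's actual validation code:

```shell
# Illustrative re-creation of the flag conflict seen in the log: passing a
# Kubernetes version while also requesting --no-kubernetes is a usage error.
validate_flags() {
  no_k8s="$1"
  k8s_version="$2"
  if [ "$no_k8s" = "true" ] && [ -n "$k8s_version" ]; then
    echo "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes" >&2
    return 14  # matches the exit status 14 reported by the test
  fi
  return 0
}

validate_flags true "1.20"
echo "exit=$?"
```

With the conflicting pair, this prints `exit=14` after emitting the usage error on stderr; with either flag alone it returns 0.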

TestNoKubernetes/serial/StartWithK8s (122.44s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-408670 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-408670 --driver=kvm2  --container-runtime=containerd: (2m2.124183903s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-408670 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (122.44s)

TestNoKubernetes/serial/StartWithStopK8s (56.52s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-408670 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-408670 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (55.249066361s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-408670 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-408670 status -o json: exit status 2 (300.984564ms)

-- stdout --
	{"Name":"NoKubernetes-408670","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-408670
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (56.52s)
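The JSON payload above is what the test inspects to confirm the host is up while the kubelet is stopped. A minimal sketch of checking those fields with only POSIX tools, against the exact payload the test received (no `jq` assumed available):

```shell
# The status document returned by `minikube status -o json` in the log above.
status_json='{"Name":"NoKubernetes-408670","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'

# Crude but dependency-free field checks via grep on the raw JSON text.
if printf '%s' "$status_json" | grep -q '"Host":"Running"' &&
   printf '%s' "$status_json" | grep -q '"Kubelet":"Stopped"'; then
  echo "host up, kubelet stopped"
fi
```

This prints `host up, kubelet stopped` for the payload shown; a real consumer would prefer a JSON parser over substring matching.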

TestNoKubernetes/serial/Start (31.58s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-408670 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-408670 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (31.582031093s)
--- PASS: TestNoKubernetes/serial/Start (31.58s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-408670 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-408670 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.672767ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
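The check above passes because `systemctl is-active --quiet <unit>` exits 0 only when the unit is active; an inactive unit commonly yields exit status 3, which is what propagated through ssh in the log. A sketch of consuming that exit code, using a stub `kubelet_is_active` since no systemd unit is assumed available here:

```shell
# Stand-in for: minikube ssh "sudo systemctl is-active --quiet service kubelet"
# on a node where the kubelet is stopped (exit 3, as in the log above).
kubelet_is_active() {
  return 3
}

kubelet_is_active
rc=$?
if [ "$rc" -eq 0 ]; then
  echo "kubelet is running"
else
  echo "kubelet is not running (is-active exit $rc)"
fi
```

For this test, the non-zero exit is the expected outcome: it verifies Kubernetes is genuinely not running in a `--no-kubernetes` profile.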

TestNoKubernetes/serial/ProfileList (2.94s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.275830105s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.94s)

TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-408670
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-408670: (1.311606835s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

TestNoKubernetes/serial/StartNoArgs (40.92s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-408670 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-408670 --driver=kvm2  --container-runtime=containerd: (40.917428157s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (40.92s)

TestNetworkPlugins/group/false (2.88s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-909913 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-909913 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (111.286578ms)

-- stdout --
	* [false-909913] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-196818/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-196818/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration

-- /stdout --
** stderr ** 
	I1005 20:48:29.155080  227961 out.go:296] Setting OutFile to fd 1 ...
	I1005 20:48:29.155359  227961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:48:29.155370  227961 out.go:309] Setting ErrFile to fd 2...
	I1005 20:48:29.155378  227961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 20:48:29.155651  227961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-196818/.minikube/bin
	I1005 20:48:29.156443  227961 out.go:303] Setting JSON to false
	I1005 20:48:29.157683  227961 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":23461,"bootTime":1696515448,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1005 20:48:29.157773  227961 start.go:138] virtualization: kvm guest
	I1005 20:48:29.160101  227961 out.go:177] * [false-909913] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1005 20:48:29.161646  227961 notify.go:220] Checking for updates...
	I1005 20:48:29.161648  227961 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 20:48:29.163177  227961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 20:48:29.164498  227961 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-196818/kubeconfig
	I1005 20:48:29.165857  227961 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-196818/.minikube
	I1005 20:48:29.167160  227961 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1005 20:48:29.168566  227961 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 20:48:29.170688  227961 config.go:182] Loaded profile config "NoKubernetes-408670": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1005 20:48:29.170827  227961 config.go:182] Loaded profile config "cert-expiration-285915": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 20:48:29.170961  227961 config.go:182] Loaded profile config "cert-options-481341": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1005 20:48:29.171085  227961 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 20:48:29.210049  227961 out.go:177] * Using the kvm2 driver based on user configuration
	I1005 20:48:29.211477  227961 start.go:298] selected driver: kvm2
	I1005 20:48:29.211492  227961 start.go:902] validating driver "kvm2" against <nil>
	I1005 20:48:29.211506  227961 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 20:48:29.213910  227961 out.go:177] 
	W1005 20:48:29.215319  227961 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1005 20:48:29.216714  227961 out.go:177] 
** /stderr **
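The expected failure above comes from driver/runtime validation: with the containerd runtime, minikube refuses `--cni=false` and exits with the MK_USAGE code (14). A hedged re-creation of that check; `check_cni` is illustrative and not minikube's real validation function:

```shell
# Illustrative version of the runtime/CNI constraint seen in the log:
# containerd has no built-in networking, so disabling CNI is a usage error.
check_cni() {
  runtime="$1"
  cni="$2"
  if [ "$runtime" = "containerd" ] && [ "$cni" = "false" ]; then
    echo 'X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI' >&2
    return 14
  fi
  return 0
}

check_cni containerd false
echo "exit=$?"
```

For any CNI other than `false` (e.g. the default auto-selection, or an explicit plugin) the check passes and startup proceeds.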
net_test.go:88: 
----------------------- debugLogs start: false-909913 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-909913

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-909913

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-909913

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-909913

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-909913

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-909913

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-909913

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-909913

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-909913

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-909913

>>> host: /etc/nsswitch.conf:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: /etc/hosts:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: /etc/resolv.conf:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-909913

>>> host: crictl pods:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: crictl containers:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> k8s: describe netcat deployment:
error: context "false-909913" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-909913" does not exist

>>> k8s: netcat logs:
error: context "false-909913" does not exist

>>> k8s: describe coredns deployment:
error: context "false-909913" does not exist

>>> k8s: describe coredns pods:
error: context "false-909913" does not exist

>>> k8s: coredns logs:
error: context "false-909913" does not exist

>>> k8s: describe api server pod(s):
error: context "false-909913" does not exist

>>> k8s: api server logs:
error: context "false-909913" does not exist

>>> host: /etc/cni:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: ip a s:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: ip r s:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: iptables-save:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: iptables table nat:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> k8s: describe kube-proxy daemon set:
error: context "false-909913" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-909913" does not exist

>>> k8s: kube-proxy logs:
error: context "false-909913" does not exist

>>> host: kubelet daemon status:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: kubelet daemon config:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> k8s: kubelet logs:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17363-196818/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 20:48:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.50.214:8443
  name: cert-expiration-285915
contexts:
- context:
    cluster: cert-expiration-285915
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 20:48:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-285915
  name: cert-expiration-285915
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-285915
  user:
    client-certificate: /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/cert-expiration-285915/client.crt
    client-key: /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/cert-expiration-285915/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-909913

>>> host: docker daemon status:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: docker daemon config:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: /etc/docker/daemon.json:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: docker system info:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: cri-docker daemon status:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: cri-docker daemon config:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: cri-dockerd version:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: containerd daemon status:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: containerd daemon config:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

>>> host: /etc/containerd/config.toml:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-909913"

                                                
                                                
----------------------- debugLogs end: false-909913 [took: 2.636431568s] --------------------------------
helpers_test.go:175: Cleaning up "false-909913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-909913
--- PASS: TestNetworkPlugins/group/false (2.88s)

                                                
                                    
TestPause/serial/Start (110.24s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-190403 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-190403 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m50.244634324s)
--- PASS: TestPause/serial/Start (110.24s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-408670 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-408670 "sudo systemctl is-active --quiet service kubelet": exit status 1 (190.764763ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.62s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (148.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.1567136207.exe start -p stopped-upgrade-858441 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.1567136207.exe start -p stopped-upgrade-858441 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m23.183374479s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.1567136207.exe -p stopped-upgrade-858441 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.1567136207.exe -p stopped-upgrade-858441 stop: (2.159166394s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-858441 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-858441 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m3.615474197s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (148.96s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (8.4s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-190403 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-190403 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (8.385765422s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.40s)

                                                
                                    
TestPause/serial/Pause (0.79s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-190403 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.79s)

                                                
                                    
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-190403 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-190403 --output=json --layout=cluster: exit status 2 (292.690034ms)

                                                
                                                
-- stdout --
	{"Name":"pause-190403","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-190403","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)

                                                
                                    
TestPause/serial/Unpause (0.79s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-190403 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

                                                
                                    
TestPause/serial/PauseAgain (0.94s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-190403 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

                                                
                                    
TestPause/serial/DeletePaused (1.25s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-190403 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-190403 --alsologtostderr -v=5: (1.248056936s)
--- PASS: TestPause/serial/DeletePaused (1.25s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (19.66s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (19.657118633s)
--- PASS: TestPause/serial/VerifyDeletedResources (19.66s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (159.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-909913 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-909913 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m39.61714925s)
--- PASS: TestNetworkPlugins/group/auto/Start (159.62s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-858441
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-858441: (1.951889003s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.95s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (100.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-909913 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-909913 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m40.791827731s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (100.79s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (145.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-909913 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-909913 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (2m25.123497634s)
--- PASS: TestNetworkPlugins/group/calico/Start (145.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (149.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-909913 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E1005 20:53:25.305869  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-909913 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (2m29.637110245s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (149.64s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-909913 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-909913 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hxl8p" [b56eb8c3-3e2b-490f-854a-a59d5e7cfaf9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hxl8p" [b56eb8c3-3e2b-490f-854a-a59d5e7cfaf9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.012449319s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-54wtp" [84dcca6d-46ca-497d-b358-1419ee195996] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.036348798s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-909913 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-909913 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s2rsw" [d06aaeb4-23da-4cd7-a3ca-8e6eca4a392c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s2rsw" [d06aaeb4-23da-4cd7-a3ca-8e6eca4a392c] Running
E1005 20:53:55.210023  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.014134564s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.83s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-909913 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-909913 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-909913 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-909913 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-909913 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-909913 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (126.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-909913 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-909913 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (2m6.601090186s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (126.60s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (119.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-909913 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-909913 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m59.473536934s)
--- PASS: TestNetworkPlugins/group/flannel/Start (119.47s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2jjxt" [6cc9a750-a4c9-49c8-8bdf-9c6b2c010883] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.028221099s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-909913 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (15.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-909913 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x5dqq" [77837646-2b33-423b-a64e-6c36051e45f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x5dqq" [77837646-2b33-423b-a64e-6c36051e45f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.017208853s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.60s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-909913 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-909913 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-909913 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-909913 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-909913 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-szzjd" [f1052cb7-4d9e-43db-b970-84ab70c78d6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-szzjd" [f1052cb7-4d9e-43db-b970-84ab70c78d6e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.036153607s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.53s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (87.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-909913 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-909913 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m27.967985581s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.97s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-909913 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-909913 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-909913 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (135.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-287925 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E1005 20:55:52.162561  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-287925 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m15.980760955s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (135.98s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-909913 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.91s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-909913 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ngnvb" [e8609da8-001d-4e06-b998-d9b996634435] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ngnvb" [e8609da8-001d-4e06-b998-d9b996634435] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.023068676s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.91s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-4xbpn" [c5ea537a-303e-48ad-b491-41d80762e7ea] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.03298768s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-909913 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-909913 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zw9hv" [47818c6e-8f2d-4140-8eeb-df7838b5abd1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zw9hv" [47818c6e-8f2d-4140-8eeb-df7838b5abd1] Running
E1005 20:56:29.276223  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.013909524s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-909913 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-909913 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-909913 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-909913 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-909913 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-909913 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.27s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.67s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-909913 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.67s)

TestNetworkPlugins/group/bridge/NetCatPod (12.45s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-909913 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hp428" [edbe74e5-5630-47e1-9bc3-adbd4c6fff6c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hp428" [edbe74e5-5630-47e1-9bc3-adbd4c6fff6c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.016443793s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.45s)

TestStartStop/group/no-preload/serial/FirstStart (87.92s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-627374 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-627374 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (1m27.915484125s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (87.92s)

TestNetworkPlugins/group/bridge/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-909913 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

TestStartStop/group/embed-certs/serial/FirstStart (144.74s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-206863 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-206863 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (2m24.736772582s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (144.74s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-909913 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-909913 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)
E1005 21:06:29.276010  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 21:06:41.885072  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:06:42.453171  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:06:45.621898  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (117.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-331598 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-331598 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (1m57.521691197s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (117.52s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-287925 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [00c82eba-eb48-4738-9ff3-45d21dd2b32d] Pending
helpers_test.go:344: "busybox" [00c82eba-eb48-4738-9ff3-45d21dd2b32d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [00c82eba-eb48-4738-9ff3-45d21dd2b32d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.061988873s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-287925 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.67s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-287925 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-287925 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.082417162s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-287925 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/old-k8s-version/serial/Stop (91.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-287925 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-287925 --alsologtostderr -v=3: (1m31.798851649s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.80s)

TestStartStop/group/no-preload/serial/DeployApp (8.03s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-627374 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [96fc1a1e-41fb-402a-ae0c-9019ca49dbd6] Pending
helpers_test.go:344: "busybox" [96fc1a1e-41fb-402a-ae0c-9019ca49dbd6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [96fc1a1e-41fb-402a-ae0c-9019ca49dbd6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.293271847s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-627374 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.03s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.33s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-627374 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-627374 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.237265525s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-627374 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.33s)

TestStartStop/group/no-preload/serial/Stop (92.32s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-627374 --alsologtostderr -v=3
E1005 20:58:25.306125  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 20:58:37.746953  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 20:58:37.752284  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 20:58:37.762598  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 20:58:37.782994  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 20:58:37.823377  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 20:58:37.903793  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 20:58:38.064009  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 20:58:38.384646  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 20:58:39.024855  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 20:58:40.305408  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 20:58:42.685676  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 20:58:42.690985  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 20:58:42.701281  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 20:58:42.721586  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 20:58:42.761924  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 20:58:42.842304  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 20:58:42.866632  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 20:58:43.003454  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 20:58:43.323968  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 20:58:43.964731  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 20:58:45.245336  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 20:58:47.806294  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 20:58:47.987831  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 20:58:52.927445  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 20:58:58.228775  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 20:59:03.168128  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-627374 --alsologtostderr -v=3: (1m32.318030039s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.32s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-331598 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5deb0367-51bb-4f7d-b8fe-e9383366283c] Pending
helpers_test.go:344: "busybox" [5deb0367-51bb-4f7d-b8fe-e9383366283c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5deb0367-51bb-4f7d-b8fe-e9383366283c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.045649548s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-331598 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.45s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-331598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-331598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.215283683s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-331598 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.32s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-331598 --alsologtostderr -v=3
E1005 20:59:18.709548  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-331598 --alsologtostderr -v=3: (1m32.131117808s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.13s)

TestStartStop/group/embed-certs/serial/DeployApp (8.53s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-206863 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3e351bd4-abe1-434a-93ab-7bf0185ef98f] Pending
helpers_test.go:344: "busybox" [3e351bd4-abe1-434a-93ab-7bf0185ef98f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3e351bd4-abe1-434a-93ab-7bf0185ef98f] Running
E1005 20:59:23.648607  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.048865856s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-206863 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.53s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-206863 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-206863 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.246045592s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-206863 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)

TestStartStop/group/embed-certs/serial/Stop (92.29s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-206863 --alsologtostderr -v=3
E1005 20:59:32.120302  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
E1005 20:59:32.125673  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
E1005 20:59:32.136052  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
E1005 20:59:32.156505  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
E1005 20:59:32.196791  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
E1005 20:59:32.277234  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
E1005 20:59:32.437813  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
E1005 20:59:32.758697  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-206863 --alsologtostderr -v=3: (1m32.286376407s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.29s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-287925 -n old-k8s-version-287925
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-287925 -n old-k8s-version-287925: exit status 7 (68.955669ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-287925 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (469.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-287925 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E1005 20:59:33.399473  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
E1005 20:59:34.680115  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
E1005 20:59:37.240359  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
E1005 20:59:42.361314  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-287925 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m49.453942129s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-287925 -n old-k8s-version-287925
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (469.79s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-627374 -n no-preload-627374
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-627374 -n no-preload-627374: exit status 7 (70.98853ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-627374 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (318.57s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-627374 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
E1005 20:59:52.601662  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
E1005 20:59:59.669811  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 21:00:03.909536  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:00:03.915034  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:00:03.925451  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:00:03.945898  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:00:03.986308  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:00:04.066782  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:00:04.227466  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:00:04.548181  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:00:04.609558  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 21:00:05.189157  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:00:06.469608  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:00:09.029768  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:00:13.082048  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
E1005 21:00:14.150072  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:00:24.391290  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:00:44.872151  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-627374 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (5m18.289265548s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-627374 -n no-preload-627374
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (318.57s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-331598 -n default-k8s-diff-port-331598
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-331598 -n default-k8s-diff-port-331598: exit status 7 (60.055419ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-331598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (317.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-331598 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
E1005 21:00:52.162308  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
E1005 21:00:54.042296  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-331598 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (5m17.116883873s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-331598 -n default-k8s-diff-port-331598
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (317.46s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-206863 -n embed-certs-206863
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-206863 -n embed-certs-206863: exit status 7 (68.674029ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-206863 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (356.97s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-206863 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
E1005 21:01:12.325009  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 21:01:14.767153  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:01:14.772482  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:01:14.782801  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:01:14.803156  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:01:14.843522  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:01:14.923917  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:01:15.084347  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:01:15.405102  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:01:16.045734  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:01:17.326882  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:01:17.936185  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:01:17.941326  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:01:17.951676  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:01:17.972122  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:01:18.012528  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:01:18.093374  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:01:18.254418  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:01:18.575300  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:01:19.216244  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:01:19.888106  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:01:20.496454  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:01:21.590406  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 21:01:23.057309  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:01:25.009022  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:01:25.833027  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:01:26.530606  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 21:01:28.178150  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:01:29.276216  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/addons-127532/client.crt: no such file or directory
E1005 21:01:35.249830  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:01:38.418386  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:01:41.885254  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:01:41.890675  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:01:41.901026  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:01:41.921372  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:01:41.961735  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:01:42.042684  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:01:42.203437  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:01:42.524070  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:01:43.165186  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:01:44.445982  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:01:47.006778  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:01:52.127993  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:01:55.730544  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:01:58.899674  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:02:02.368822  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:02:15.963503  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
E1005 21:02:22.849136  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:02:36.691138  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:02:39.860670  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:02:47.753683  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:03:03.809857  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:03:25.305413  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/functional-604028/client.crt: no such file or directory
E1005 21:03:37.746402  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 21:03:42.685882  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 21:03:58.612117  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:04:01.781550  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
E1005 21:04:05.430884  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/auto-909913/client.crt: no such file or directory
E1005 21:04:10.371606  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/kindnet-909913/client.crt: no such file or directory
E1005 21:04:25.730825  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
E1005 21:04:32.120362  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
E1005 21:04:59.804672  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/calico-909913/client.crt: no such file or directory
E1005 21:05:03.909959  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-206863 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (5m56.553244698s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-206863 -n embed-certs-206863
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (356.97s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rtd29" [69b50a79-ce90-435e-8912-e2310a98eb76] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020769218s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rtd29" [69b50a79-ce90-435e-8912-e2310a98eb76] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01205804s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-627374 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-627374 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (2.68s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-627374 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-627374 -n no-preload-627374
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-627374 -n no-preload-627374: exit status 2 (248.494804ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-627374 -n no-preload-627374
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-627374 -n no-preload-627374: exit status 2 (254.502314ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-627374 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-627374 -n no-preload-627374
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-627374 -n no-preload-627374
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.68s)

TestStartStop/group/newest-cni/serial/FirstStart (91.03s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-619936 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
E1005 21:05:31.594319  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/custom-flannel-909913/client.crt: no such file or directory
E1005 21:05:52.162532  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/ingress-addon-legacy-544209/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-619936 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (1m31.02683294s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (91.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-c7hpk" [5de5aecc-7f74-4315-b35a-0d308a615064] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.042587824s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.04s)
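The UserAppExistsAfterStop step waits up to 9m0s for a pod matching `k8s-app=kubernetes-dashboard` to reach Running. A sketch of such a poll loop; the inline stub stands in for an actual `kubectl get pods` phase query, and its canned Pending-then-Running answers are hypothetical:

```shell
#!/bin/sh
# Poll until the watched pod reports phase "Running", bounded by a retry cap.
# The phase assignment below is a stub standing in for something like:
#   kubectl get pods -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard \
#     -o jsonpath='{.items[0].status.phase}'
attempts=0
phase="Pending"
while [ "$phase" != "Running" ] && [ "$attempts" -lt 10 ]; do
  attempts=$((attempts + 1))
  # Stub: pretend the pod becomes Ready on the third poll.
  if [ "$attempts" -ge 3 ]; then phase="Running"; else phase="Pending"; fi
done
echo "pod $phase after $attempts polls"
```

The real harness layers a wall-clock timeout on top of this and reports the elapsed time ("healthy within 5.042587824s") rather than a poll count.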

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-c7hpk" [5de5aecc-7f74-4315-b35a-0d308a615064] Running
E1005 21:06:14.766854  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/enable-default-cni-909913/client.crt: no such file or directory
E1005 21:06:17.936119  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/flannel-909913/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.087368191s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-331598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-331598 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)
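The VerifyKubernetesImages step runs `sudo crictl images -o json` over SSH and reports every image that is not part of the stock minikube/Kubernetes set. A stand-in sketch of that filter: it works on a plain list of image names (the flagged names come from the log above; the `registry.k8s.io/` entries are illustrative) rather than parsing crictl's actual JSON, and treating `registry.k8s.io/` as the only "expected" prefix is an assumption:

```shell
#!/bin/sh
# Flag any image whose registry prefix is not on the expected list.
# A real implementation would read the list from `crictl images -o json`.
images='registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/pause:3.9
kindest/kindnetd:v20230809-80a64d96
gcr.io/k8s-minikube/busybox:1.28.4-glibc'

printf '%s\n' "$images" \
  | grep -vE '^registry\.k8s\.io/' \
  | while read -r img; do
      echo "Found non-minikube image: $img"
    done
```

Run against the list above, this flags `kindest/kindnetd` and the busybox helper image, matching the two "Found non-minikube image" lines in the log.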

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-331598 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-331598 --alsologtostderr -v=1: (1.109457629s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-331598 -n default-k8s-diff-port-331598
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-331598 -n default-k8s-diff-port-331598: exit status 2 (318.177219ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-331598 -n default-k8s-diff-port-331598
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-331598 -n default-k8s-diff-port-331598: exit status 2 (354.478965ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-331598 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-331598 -n default-k8s-diff-port-331598
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-331598 -n default-k8s-diff-port-331598
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.73s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.91s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-619936 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-619936 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.911618376s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.91s)
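The `addons enable metrics-server` invocation above overrides images and registries with `Component=Value` pairs (`--images=MetricsServer=registry.k8s.io/echoserver:1.4`, `--registries=MetricsServer=fake.domain`). A sketch of splitting one such pair at the first `=`, assuming the flag format is a simple `key=value` mapping (the pair is taken from the log; the parsing itself is illustrative):

```shell
#!/bin/sh
# Split a Component=Image override at the FIRST '=' using POSIX parameter
# expansion; anything after it (including further '=' or ':') is the value.
pair='MetricsServer=registry.k8s.io/echoserver:1.4'
component=${pair%%=*}   # strip longest '=...' suffix -> component name
image=${pair#*=}        # strip shortest '...=' prefix -> image reference
echo "$component -> $image"
```

Splitting at the first `=` matters because the value side can legally contain `:` and `/`, and a registry override like `MetricsServer=fake.domain` parses the same way.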

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-619936 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-619936 --alsologtostderr -v=3: (2.269538937s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (19.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dp7gm" [3867f3b0-42f5-4706-9f6a-43b3a111b471] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dp7gm" [3867f3b0-42f5-4706-9f6a-43b3a111b471] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.030212203s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (19.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-619936 -n newest-cni-619936
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-619936 -n newest-cni-619936: exit status 7 (77.892489ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-619936 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (51.72s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-619936 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
E1005 21:07:09.572054  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/bridge-909913/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-619936 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (51.416757206s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-619936 -n newest-cni-619936
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (51.72s)
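Each passing step records its wall-clock time in a trailing parenthesized suffix, e.g. `(51.416757206s)` on the Done line above. A small sketch for pulling that duration back out of a log line with `sed`; the shortened command text in the sample line is illustrative, only the duration suffix mirrors the real format:

```shell
#!/bin/sh
# Extract the trailing "(<seconds>s)" duration from a harness log line.
line='start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-619936: (51.416757206s)'
dur=$(printf '%s\n' "$line" | sed -n 's/.*(\([0-9.]*s\))$/\1/p')
echo "$dur"
```

The greedy `.*(` anchors on the last opening parenthesis, so earlier parenthesized fragments like `(dbg)` in the same line do not interfere.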

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dp7gm" [3867f3b0-42f5-4706-9f6a-43b3a111b471] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020368445s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-206863 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-206863 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.07s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-206863 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-206863 -n embed-certs-206863
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-206863 -n embed-certs-206863: exit status 2 (268.21685ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-206863 -n embed-certs-206863
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-206863 -n embed-certs-206863: exit status 2 (284.469425ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-206863 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-206863 -n embed-certs-206863
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-206863 -n embed-certs-206863
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-8gxhs" [48181ee3-5491-42ab-85f3-8b9e19fa1f2b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023751851s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-8gxhs" [48181ee3-5491-42ab-85f3-8b9e19fa1f2b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013533143s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-287925 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-287925 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-287925 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-287925 -n old-k8s-version-287925
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-287925 -n old-k8s-version-287925: exit status 2 (271.281099ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-287925 -n old-k8s-version-287925
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-287925 -n old-k8s-version-287925: exit status 2 (291.593027ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-287925 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-287925 -n old-k8s-version-287925
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-287925 -n old-k8s-version-287925
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.71s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-619936 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.51s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-619936 --alsologtostderr -v=1
E1005 21:07:51.876794  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/old-k8s-version-287925/client.crt: no such file or directory
E1005 21:07:51.882087  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/old-k8s-version-287925/client.crt: no such file or directory
E1005 21:07:51.892488  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/old-k8s-version-287925/client.crt: no such file or directory
E1005 21:07:51.913470  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/old-k8s-version-287925/client.crt: no such file or directory
E1005 21:07:51.954525  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/old-k8s-version-287925/client.crt: no such file or directory
E1005 21:07:52.035515  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/old-k8s-version-287925/client.crt: no such file or directory
E1005 21:07:52.196647  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/old-k8s-version-287925/client.crt: no such file or directory
E1005 21:07:52.517257  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/old-k8s-version-287925/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-619936 -n newest-cni-619936
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-619936 -n newest-cni-619936: exit status 2 (263.738746ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-619936 -n newest-cni-619936
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-619936 -n newest-cni-619936: exit status 2 (252.917314ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-619936 --alsologtostderr -v=1
E1005 21:07:53.158417  204004 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/old-k8s-version-287925/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-619936 -n newest-cni-619936
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-619936 -n newest-cni-619936
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.51s)

                                                
                                    

Test skip (36/305)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.2/cached-images 0
13 TestDownloadOnly/v1.28.2/binaries 0
14 TestDownloadOnly/v1.28.2/kubectl 0
18 TestDownloadOnlyKic 0
32 TestAddons/parallel/Olm 0
43 TestDockerFlags 0
46 TestDockerEnvContainerd 0
48 TestHyperKitDriverInstallOrUpdate 0
49 TestHyperkitDriverSkipUpgrade 0
100 TestFunctional/parallel/DockerEnv 0
101 TestFunctional/parallel/PodmanEnv 0
108 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
109 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
110 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
111 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
112 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
113 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
114 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
115 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
149 TestGvisorAddon 0
150 TestImageBuild 0
183 TestKicCustomNetwork 0
184 TestKicExistingNetwork 0
185 TestKicCustomSubnet 0
186 TestKicStaticIP 0
217 TestChangeNoneUser 0
220 TestScheduledStopWindows 0
222 TestSkaffold 0
224 TestInsufficientStorage 0
228 TestMissingContainerUpgrade 0
240 TestNetworkPlugins/group/kubenet 2.99
248 TestNetworkPlugins/group/cilium 3.07
257 TestStartStop/group/disable-driver-mounts 0.15
TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:496: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
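Runtime-gated skips like the one above (docker_test.go:41) follow a common Go pattern: compare the profile's configured container runtime against the runtime the test requires, and skip with a message when they differ. A minimal sketch of that decision, with a hypothetical helper (not minikube's actual code):

```go
package main

import "fmt"

// shouldSkip reports whether a runtime-specific test must be skipped,
// mirroring guards such as "only runs with docker container runtime,
// currently testing containerd". Hypothetical helper for illustration.
func shouldSkip(requiredRuntime, actualRuntime string) (bool, string) {
	if actualRuntime != requiredRuntime {
		return true, fmt.Sprintf(
			"skipping: only runs with %s container runtime, currently testing %s",
			requiredRuntime, actualRuntime)
	}
	return false, ""
}

func main() {
	// In a real test this message would be passed to t.Skipf.
	skip, reason := shouldSkip("docker", "containerd")
	fmt.Println(skip, reason)
}
```

In the actual test suite this check happens at the top of the test body, so a mismatched runtime costs 0.00s, as the durations above show.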

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (2.99s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-909913 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-909913

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-909913

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-909913

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-909913

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-909913

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-909913

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-909913

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-909913

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-909913

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-909913

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: /etc/hosts:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: /etc/resolv.conf:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-909913

>>> host: crictl pods:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: crictl containers:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> k8s: describe netcat deployment:
error: context "kubenet-909913" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-909913" does not exist

>>> k8s: netcat logs:
error: context "kubenet-909913" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-909913" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-909913" does not exist

>>> k8s: coredns logs:
error: context "kubenet-909913" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-909913" does not exist

>>> k8s: api server logs:
error: context "kubenet-909913" does not exist

>>> host: /etc/cni:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: ip a s:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: ip r s:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: iptables-save:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: iptables table nat:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-909913" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-909913" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-909913" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: kubelet daemon config:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> k8s: kubelet logs:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17363-196818/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 20:48:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.50.214:8443
  name: cert-expiration-285915
contexts:
- context:
    cluster: cert-expiration-285915
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 20:48:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-285915
  name: cert-expiration-285915
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-285915
  user:
    client-certificate: /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/cert-expiration-285915/client.crt
    client-key: /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/cert-expiration-285915/client.key
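The kubeconfig dump above explains the repeated "context was not found" errors in these debug logs: the only context defined is cert-expiration-285915 and current-context is empty, so any lookup for kubenet-909913 must fail. A sketch of that lookup (illustrative only, not kubectl's actual code; the context list is taken from the config above):

```go
package main

import "fmt"

// contexts mirrors the kubeconfig shown above: only one context exists,
// and it is not the one the debug-log commands ask for.
var contexts = []string{"cert-expiration-285915"}

// lookupContext mimics kubectl's behavior when a requested context
// is absent from the kubeconfig, producing the error text seen above.
func lookupContext(name string) error {
	for _, c := range contexts {
		if c == name {
			return nil // context exists; commands would proceed
		}
	}
	return fmt.Errorf("context was not found for specified context: %s", name)
}

func main() {
	fmt.Println(lookupContext("kubenet-909913"))
	// → context was not found for specified context: kubenet-909913
}
```

This is expected here: the kubenet profile was never started (the test skipped before cluster creation), so no context for it was ever written.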

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-909913

>>> host: docker daemon status:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: docker daemon config:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: docker system info:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: cri-docker daemon status:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: cri-docker daemon config:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: cri-dockerd version:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: containerd daemon status:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: containerd daemon config:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: containerd config dump:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: crio daemon status:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: crio daemon config:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: /etc/crio:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

>>> host: crio config:
* Profile "kubenet-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-909913"

----------------------- debugLogs end: kubenet-909913 [took: 2.841685508s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-909913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-909913
--- SKIP: TestNetworkPlugins/group/kubenet (2.99s)

TestNetworkPlugins/group/cilium (3.07s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-909913 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-909913

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-909913

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-909913

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-909913

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-909913

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-909913

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-909913

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-909913

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-909913

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-909913

>>> host: /etc/nsswitch.conf:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: /etc/hosts:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: /etc/resolv.conf:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-909913

>>> host: crictl pods:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: crictl containers:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> k8s: describe netcat deployment:
error: context "cilium-909913" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-909913" does not exist

>>> k8s: netcat logs:
error: context "cilium-909913" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-909913" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-909913" does not exist

>>> k8s: coredns logs:
error: context "cilium-909913" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-909913" does not exist

>>> k8s: api server logs:
error: context "cilium-909913" does not exist

>>> host: /etc/cni:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: ip a s:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: ip r s:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: iptables-save:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: iptables table nat:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-909913

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-909913

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-909913" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-909913" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-909913

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-909913

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-909913" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-909913" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-909913" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-909913" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-909913" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: kubelet daemon config:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> k8s: kubelet logs:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17363-196818/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 20:48:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.50.214:8443
  name: cert-expiration-285915
contexts:
- context:
    cluster: cert-expiration-285915
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 20:48:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-285915
  name: cert-expiration-285915
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-285915
  user:
    client-certificate: /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/cert-expiration-285915/client.crt
    client-key: /home/jenkins/minikube-integration/17363-196818/.minikube/profiles/cert-expiration-285915/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-909913

>>> host: docker daemon status:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: docker daemon config:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: docker system info:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: cri-docker daemon status:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: cri-docker daemon config:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: cri-dockerd version:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: containerd daemon status:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: containerd daemon config:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: containerd config dump:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: crio daemon status:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: crio daemon config:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: /etc/crio:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

>>> host: crio config:
* Profile "cilium-909913" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909913"

----------------------- debugLogs end: cilium-909913 [took: 2.92778793s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-909913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-909913
--- SKIP: TestNetworkPlugins/group/cilium (3.07s)
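Every `Error in configuration: context was not found` and `Profile "cilium-909913" not found` line in the debug logs above comes from collection running after the test profile had already been deleted. A minimal, hypothetical sketch of guarding such a collector — the `check_profile` helper and its messages are illustrative, not part of the minikube test suite, though `minikube profile list` and `kubectl config get-contexts` are the real commands these errors point at:

```shell
# Hypothetical guard: only collect debug logs when the minikube profile
# and the matching kubectl context actually exist, so a deleted profile
# produces one "skip" line instead of dozens of lookup errors.
check_profile() {
    profile="$1"
    if ! command -v minikube >/dev/null 2>&1; then
        echo "skip: minikube not installed"
        return 1
    fi
    if ! minikube profile list 2>/dev/null | grep -q "$profile"; then
        echo "skip: profile $profile not found"
        return 1
    fi
    if ! command -v kubectl >/dev/null 2>&1; then
        echo "skip: kubectl not installed"
        return 1
    fi
    if ! kubectl config get-contexts -o name 2>/dev/null | grep -qx "$profile"; then
        echo "skip: context $profile not found"
        return 1
    fi
    echo "ok: $profile"
}

check_profile "cilium-909913"
```

On a host without the deleted profile (or without minikube at all) this prints a single `skip:` line, which is the behaviour the noisy `debugLogs` sections above would benefit from.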

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-428342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-428342
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
