Test Report: KVM_Linux_containerd 17786

60db19ee13899525e0398a7e77320dad96a35a73:2024-03-19:33641

Test failures (1/333)

Order  Failed test                    Duration
38     TestAddons/parallel/Registry   28.16s
TestAddons/parallel/Registry (28.16s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 21.494186ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-j77v8" [704b949f-b38b-43ab-a839-0657d4adf742] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.095393181s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qfpvm" [ec232402-c520-4e21-b409-309418205181] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005979044s
addons_test.go:340: (dbg) Run:  kubectl --context addons-935788 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-935788 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-935788 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (14.005458078s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-935788 ip
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-935788 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-935788 addons disable registry --alsologtostderr -v=1: exit status 11 (482.14132ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0318 22:43:59.375213   16270 out.go:291] Setting OutFile to fd 1 ...
	I0318 22:43:59.375408   16270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:43:59.375421   16270 out.go:304] Setting ErrFile to fd 2...
	I0318 22:43:59.375427   16270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:43:59.375716   16270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
	I0318 22:43:59.376074   16270 mustload.go:65] Loading cluster: addons-935788
	I0318 22:43:59.376521   16270 config.go:182] Loaded profile config "addons-935788": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0318 22:43:59.376545   16270 addons.go:597] checking whether the cluster is paused
	I0318 22:43:59.376642   16270 config.go:182] Loaded profile config "addons-935788": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0318 22:43:59.376661   16270 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:43:59.377004   16270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:43:59.377039   16270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:43:59.393132   16270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
	I0318 22:43:59.393549   16270 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:43:59.394204   16270 main.go:141] libmachine: Using API Version  1
	I0318 22:43:59.394236   16270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:43:59.394676   16270 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:43:59.394909   16270 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:43:59.396678   16270 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:43:59.396926   16270 ssh_runner.go:195] Run: systemctl --version
	I0318 22:43:59.396954   16270 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:43:59.399343   16270 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:43:59.399795   16270 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:43:59.399832   16270 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:43:59.399969   16270 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:43:59.400139   16270 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:43:59.400282   16270 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:43:59.400446   16270 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:43:59.513214   16270 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0318 22:43:59.513297   16270 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 22:43:59.643457   16270 cri.go:89] found id: "5e338e9119382d51d7cf5414062e49c17bc8f1327bb6a9d2084c10dff42ffb9a"
	I0318 22:43:59.643480   16270 cri.go:89] found id: "6b3faba73d4f931d4273b8f1a020e7f562bbd9302a41450652233514421590e5"
	I0318 22:43:59.643483   16270 cri.go:89] found id: "f0f37e4827a8c02d6f2f88b98c565885f338732948940c87880722ae0d7cbe28"
	I0318 22:43:59.643490   16270 cri.go:89] found id: "2ed22460502eed8d119fa2fd5971e366d59cfa7af659aa381e4a5d886db21254"
	I0318 22:43:59.643493   16270 cri.go:89] found id: "d57b4008b0d239e061ed257485f6ddad531a416886328560d9317a21c376a6f4"
	I0318 22:43:59.643501   16270 cri.go:89] found id: "755d6141f48efb04445e7849b9ae0685ea56abed04d2b49cd78b7b16173725d4"
	I0318 22:43:59.643504   16270 cri.go:89] found id: "5b5fa7b47701a6d8373931d273db3d951e2c37d53f24e4228b4aec30bc43273b"
	I0318 22:43:59.643506   16270 cri.go:89] found id: "b9959acfcea85cc18606c1ce2a015302ca605e446263bfd42fcc530a347b4e57"
	I0318 22:43:59.643508   16270 cri.go:89] found id: "12d626e074ade429feb5361401fff8d31750ea3a1d6ffe011869ae417b940f56"
	I0318 22:43:59.643518   16270 cri.go:89] found id: "7672cbdbc446000a2fcaee17bba5d146fd7c6eadc9b2d57c220bb13a7ef50495"
	I0318 22:43:59.643521   16270 cri.go:89] found id: "fdcd84f4c2818b0ad981d1c8828340c3c10fdc449bd63758395af72c77c4ea58"
	I0318 22:43:59.643524   16270 cri.go:89] found id: "b8037774af38a93022f559819a9faa13bfed5837e454bdab551f406d71246f00"
	I0318 22:43:59.643526   16270 cri.go:89] found id: "14a3183910aac85c0cd0168d504aa3241dfad29f1746d55f76b184cede3b6c04"
	I0318 22:43:59.643528   16270 cri.go:89] found id: "5b2b2c0f6b79f4bb28899b14beae63bc3342b12cb1a5fba63facbadf142b9793"
	I0318 22:43:59.643532   16270 cri.go:89] found id: "b2b06719c653d688ffb2092ff374bb9b0924e2c4179a217fd11141f5aa7a6928"
	I0318 22:43:59.643534   16270 cri.go:89] found id: "3fa9f07cb115a573fbd12e608693c75e5d844c355a081d105114529399cf04c2"
	I0318 22:43:59.643537   16270 cri.go:89] found id: "6563759498e7169a18ed9e35faab804ba6aa738c954ff2d1ff833213c54179d3"
	I0318 22:43:59.643543   16270 cri.go:89] found id: "66701875000917f3946ecc9cda1f877024e5374f4bd098d1ced23981a63a2d34"
	I0318 22:43:59.643546   16270 cri.go:89] found id: "ff8846e9d0768c500f7056ee945d1d20bfb02db0f95b59582cea53d1d10700ad"
	I0318 22:43:59.643548   16270 cri.go:89] found id: "907d446672f347746925f615baed539274a7dda031174df1afd39a4e98e93fe0"
	I0318 22:43:59.643551   16270 cri.go:89] found id: "851fe0b66ee614b81d0234b994f66f222ee11e7ab31b2f1743d257c721834003"
	I0318 22:43:59.643553   16270 cri.go:89] found id: "868a9521feab9433b650ec5d71e54ef73195056a0531e8612d77036a7d021805"
	I0318 22:43:59.643556   16270 cri.go:89] found id: ""
	I0318 22:43:59.643610   16270 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0318 22:43:59.787147   16270 main.go:141] libmachine: Making call to close driver server
	I0318 22:43:59.787172   16270 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:43:59.787469   16270 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:43:59.787493   16270 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:43:59.787500   16270 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:43:59.789476   16270 out.go:177] 
	W0318 22:43:59.790562   16270 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-18T22:43:59Z" level=error msg="stat /run/containerd/runc/k8s.io/e6012745972725f2d9ccb746246e50ebed7cdf2f83d2fa947c0438ebac0e441f: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-18T22:43:59Z" level=error msg="stat /run/containerd/runc/k8s.io/e6012745972725f2d9ccb746246e50ebed7cdf2f83d2fa947c0438ebac0e441f: no such file or directory"
	
	W0318 22:43:59.790578   16270 out.go:239] * 
	* 
	W0318 22:43:59.792490   16270 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 22:43:59.793807   16270 out.go:177] 

** /stderr **
addons_test.go:390: failed to disable registry addon. args "out/minikube-linux-amd64 -p addons-935788 addons disable registry --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-935788 -n addons-935788
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-935788 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-935788 logs -n 25: (1.51228428s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-442829 | jenkins | v1.32.0 | 18 Mar 24 22:37 UTC |                     |
	|         | -p download-only-442829                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 18 Mar 24 22:38 UTC | 18 Mar 24 22:38 UTC |
	| delete  | -p download-only-442829                                                                     | download-only-442829 | jenkins | v1.32.0 | 18 Mar 24 22:38 UTC | 18 Mar 24 22:38 UTC |
	| start   | -o=json --download-only                                                                     | download-only-158809 | jenkins | v1.32.0 | 18 Mar 24 22:38 UTC |                     |
	|         | -p download-only-158809                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 18 Mar 24 22:38 UTC | 18 Mar 24 22:38 UTC |
	| delete  | -p download-only-158809                                                                     | download-only-158809 | jenkins | v1.32.0 | 18 Mar 24 22:38 UTC | 18 Mar 24 22:38 UTC |
	| start   | -o=json --download-only                                                                     | download-only-497199 | jenkins | v1.32.0 | 18 Mar 24 22:38 UTC |                     |
	|         | -p download-only-497199                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 18 Mar 24 22:39 UTC | 18 Mar 24 22:39 UTC |
	| delete  | -p download-only-497199                                                                     | download-only-497199 | jenkins | v1.32.0 | 18 Mar 24 22:39 UTC | 18 Mar 24 22:39 UTC |
	| delete  | -p download-only-442829                                                                     | download-only-442829 | jenkins | v1.32.0 | 18 Mar 24 22:40 UTC | 18 Mar 24 22:40 UTC |
	| delete  | -p download-only-158809                                                                     | download-only-158809 | jenkins | v1.32.0 | 18 Mar 24 22:40 UTC | 18 Mar 24 22:40 UTC |
	| delete  | -p download-only-497199                                                                     | download-only-497199 | jenkins | v1.32.0 | 18 Mar 24 22:40 UTC | 18 Mar 24 22:40 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-728007 | jenkins | v1.32.0 | 18 Mar 24 22:40 UTC |                     |
	|         | binary-mirror-728007                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44837                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-728007                                                                     | binary-mirror-728007 | jenkins | v1.32.0 | 18 Mar 24 22:40 UTC | 18 Mar 24 22:40 UTC |
	| addons  | disable dashboard -p                                                                        | addons-935788        | jenkins | v1.32.0 | 18 Mar 24 22:40 UTC |                     |
	|         | addons-935788                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-935788        | jenkins | v1.32.0 | 18 Mar 24 22:40 UTC |                     |
	|         | addons-935788                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-935788 --wait=true                                                                | addons-935788        | jenkins | v1.32.0 | 18 Mar 24 22:40 UTC | 18 Mar 24 22:43 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-935788 addons disable                                                                | addons-935788        | jenkins | v1.32.0 | 18 Mar 24 22:43 UTC | 18 Mar 24 22:43 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-935788 ssh cat                                                                       | addons-935788        | jenkins | v1.32.0 | 18 Mar 24 22:43 UTC | 18 Mar 24 22:43 UTC |
	|         | /opt/local-path-provisioner/pvc-dc5ffa50-03c1-493c-ae58-8e564d4e9229_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-935788 addons disable                                                                | addons-935788        | jenkins | v1.32.0 | 18 Mar 24 22:43 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-935788        | jenkins | v1.32.0 | 18 Mar 24 22:43 UTC |                     |
	|         | addons-935788                                                                               |                      |         |         |                     |                     |
	| ip      | addons-935788 ip                                                                            | addons-935788        | jenkins | v1.32.0 | 18 Mar 24 22:43 UTC | 18 Mar 24 22:43 UTC |
	| addons  | addons-935788 addons disable                                                                | addons-935788        | jenkins | v1.32.0 | 18 Mar 24 22:43 UTC |                     |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 22:40:01
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 22:40:01.016155   14794 out.go:291] Setting OutFile to fd 1 ...
	I0318 22:40:01.016426   14794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:40:01.016437   14794 out.go:304] Setting ErrFile to fd 2...
	I0318 22:40:01.016444   14794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:40:01.016645   14794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
	I0318 22:40:01.017215   14794 out.go:298] Setting JSON to false
	I0318 22:40:01.018031   14794 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1344,"bootTime":1710800257,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 22:40:01.018084   14794 start.go:139] virtualization: kvm guest
	I0318 22:40:01.020094   14794 out.go:177] * [addons-935788] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 22:40:01.021833   14794 out.go:177]   - MINIKUBE_LOCATION=17786
	I0318 22:40:01.021859   14794 notify.go:220] Checking for updates...
	I0318 22:40:01.024194   14794 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 22:40:01.025449   14794 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17786-6465/kubeconfig
	I0318 22:40:01.026898   14794 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17786-6465/.minikube
	I0318 22:40:01.028224   14794 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 22:40:01.029973   14794 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 22:40:01.031422   14794 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 22:40:01.061134   14794 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 22:40:01.062305   14794 start.go:297] selected driver: kvm2
	I0318 22:40:01.062317   14794 start.go:901] validating driver "kvm2" against <nil>
	I0318 22:40:01.062326   14794 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 22:40:01.062973   14794 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 22:40:01.063028   14794 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17786-6465/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 22:40:01.076210   14794 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 22:40:01.076257   14794 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 22:40:01.076455   14794 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:40:01.076505   14794 cni.go:84] Creating CNI manager for ""
	I0318 22:40:01.076517   14794 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0318 22:40:01.076526   14794 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 22:40:01.076565   14794 start.go:340] cluster config:
	{Name:addons-935788 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-935788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
ontainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 22:40:01.076654   14794 iso.go:125] acquiring lock: {Name:mk80345eb1a53e1b6e30e36ffde20e6b42fffb9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 22:40:01.078165   14794 out.go:177] * Starting "addons-935788" primary control-plane node in "addons-935788" cluster
	I0318 22:40:01.079356   14794 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0318 22:40:01.079381   14794 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17786-6465/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	I0318 22:40:01.079392   14794 cache.go:56] Caching tarball of preloaded images
	I0318 22:40:01.079465   14794 preload.go:173] Found /home/jenkins/minikube-integration/17786-6465/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 22:40:01.079476   14794 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0318 22:40:01.079749   14794 profile.go:142] Saving config to /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/config.json ...
	I0318 22:40:01.079769   14794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/config.json: {Name:mk16e1dcf7eafb71fba7c15a037a78fa8316bce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:40:01.079883   14794 start.go:360] acquireMachinesLock for addons-935788: {Name:mk93a71e8ced81fce03b99ec67ec46829236cc68 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 22:40:01.079928   14794 start.go:364] duration metric: took 31.985µs to acquireMachinesLock for "addons-935788"
	I0318 22:40:01.079946   14794 start.go:93] Provisioning new machine with config: &{Name:addons-935788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-935788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0318 22:40:01.079992   14794 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 22:40:01.081392   14794 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0318 22:40:01.081495   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:01.081527   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:01.093656   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40463
	I0318 22:40:01.094017   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:01.094522   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:01.094542   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:01.094862   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:01.095052   14794 main.go:141] libmachine: (addons-935788) Calling .GetMachineName
	I0318 22:40:01.095192   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:01.095337   14794 start.go:159] libmachine.API.Create for "addons-935788" (driver="kvm2")
	I0318 22:40:01.095365   14794 client.go:168] LocalClient.Create starting
	I0318 22:40:01.095404   14794 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17786-6465/.minikube/certs/ca.pem
	I0318 22:40:01.202641   14794 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17786-6465/.minikube/certs/cert.pem
	I0318 22:40:01.358512   14794 main.go:141] libmachine: Running pre-create checks...
	I0318 22:40:01.358535   14794 main.go:141] libmachine: (addons-935788) Calling .PreCreateCheck
	I0318 22:40:01.358977   14794 main.go:141] libmachine: (addons-935788) Calling .GetConfigRaw
	I0318 22:40:01.359377   14794 main.go:141] libmachine: Creating machine...
	I0318 22:40:01.359391   14794 main.go:141] libmachine: (addons-935788) Calling .Create
	I0318 22:40:01.359508   14794 main.go:141] libmachine: (addons-935788) Creating KVM machine...
	I0318 22:40:01.360693   14794 main.go:141] libmachine: (addons-935788) DBG | found existing default KVM network
	I0318 22:40:01.361424   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:01.361316   14816 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0318 22:40:01.361457   14794 main.go:141] libmachine: (addons-935788) DBG | created network xml: 
	I0318 22:40:01.361469   14794 main.go:141] libmachine: (addons-935788) DBG | <network>
	I0318 22:40:01.361477   14794 main.go:141] libmachine: (addons-935788) DBG |   <name>mk-addons-935788</name>
	I0318 22:40:01.361486   14794 main.go:141] libmachine: (addons-935788) DBG |   <dns enable='no'/>
	I0318 22:40:01.361529   14794 main.go:141] libmachine: (addons-935788) DBG |   
	I0318 22:40:01.361571   14794 main.go:141] libmachine: (addons-935788) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0318 22:40:01.361588   14794 main.go:141] libmachine: (addons-935788) DBG |     <dhcp>
	I0318 22:40:01.361598   14794 main.go:141] libmachine: (addons-935788) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0318 22:40:01.361611   14794 main.go:141] libmachine: (addons-935788) DBG |     </dhcp>
	I0318 22:40:01.361622   14794 main.go:141] libmachine: (addons-935788) DBG |   </ip>
	I0318 22:40:01.361627   14794 main.go:141] libmachine: (addons-935788) DBG |   
	I0318 22:40:01.361635   14794 main.go:141] libmachine: (addons-935788) DBG | </network>
	I0318 22:40:01.361642   14794 main.go:141] libmachine: (addons-935788) DBG | 
	I0318 22:40:01.366691   14794 main.go:141] libmachine: (addons-935788) DBG | trying to create private KVM network mk-addons-935788 192.168.39.0/24...
	I0318 22:40:01.426351   14794 main.go:141] libmachine: (addons-935788) DBG | private KVM network mk-addons-935788 192.168.39.0/24 created
	I0318 22:40:01.426443   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:01.426328   14816 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17786-6465/.minikube
	I0318 22:40:01.426492   14794 main.go:141] libmachine: (addons-935788) Setting up store path in /home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788 ...
	I0318 22:40:01.426516   14794 main.go:141] libmachine: (addons-935788) Building disk image from file:///home/jenkins/minikube-integration/17786-6465/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0318 22:40:01.426543   14794 main.go:141] libmachine: (addons-935788) Downloading /home/jenkins/minikube-integration/17786-6465/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17786-6465/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0318 22:40:01.694657   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:01.694512   14816 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa...
	I0318 22:40:01.771365   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:01.771262   14816 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/addons-935788.rawdisk...
	I0318 22:40:01.771399   14794 main.go:141] libmachine: (addons-935788) DBG | Writing magic tar header
	I0318 22:40:01.771414   14794 main.go:141] libmachine: (addons-935788) DBG | Writing SSH key tar header
	I0318 22:40:01.771426   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:01.771377   14816 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788 ...
	I0318 22:40:01.771476   14794 main.go:141] libmachine: (addons-935788) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788
	I0318 22:40:01.771504   14794 main.go:141] libmachine: (addons-935788) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17786-6465/.minikube/machines
	I0318 22:40:01.771517   14794 main.go:141] libmachine: (addons-935788) Setting executable bit set on /home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788 (perms=drwx------)
	I0318 22:40:01.771531   14794 main.go:141] libmachine: (addons-935788) Setting executable bit set on /home/jenkins/minikube-integration/17786-6465/.minikube/machines (perms=drwxr-xr-x)
	I0318 22:40:01.771543   14794 main.go:141] libmachine: (addons-935788) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17786-6465/.minikube
	I0318 22:40:01.771558   14794 main.go:141] libmachine: (addons-935788) Setting executable bit set on /home/jenkins/minikube-integration/17786-6465/.minikube (perms=drwxr-xr-x)
	I0318 22:40:01.771572   14794 main.go:141] libmachine: (addons-935788) Setting executable bit set on /home/jenkins/minikube-integration/17786-6465 (perms=drwxrwxr-x)
	I0318 22:40:01.771587   14794 main.go:141] libmachine: (addons-935788) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17786-6465
	I0318 22:40:01.771599   14794 main.go:141] libmachine: (addons-935788) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 22:40:01.771615   14794 main.go:141] libmachine: (addons-935788) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 22:40:01.771624   14794 main.go:141] libmachine: (addons-935788) Creating domain...
	I0318 22:40:01.771656   14794 main.go:141] libmachine: (addons-935788) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 22:40:01.771676   14794 main.go:141] libmachine: (addons-935788) DBG | Checking permissions on dir: /home/jenkins
	I0318 22:40:01.771683   14794 main.go:141] libmachine: (addons-935788) DBG | Checking permissions on dir: /home
	I0318 22:40:01.771693   14794 main.go:141] libmachine: (addons-935788) DBG | Skipping /home - not owner
	I0318 22:40:01.772565   14794 main.go:141] libmachine: (addons-935788) define libvirt domain using xml: 
	I0318 22:40:01.772586   14794 main.go:141] libmachine: (addons-935788) <domain type='kvm'>
	I0318 22:40:01.772595   14794 main.go:141] libmachine: (addons-935788)   <name>addons-935788</name>
	I0318 22:40:01.772603   14794 main.go:141] libmachine: (addons-935788)   <memory unit='MiB'>4000</memory>
	I0318 22:40:01.772615   14794 main.go:141] libmachine: (addons-935788)   <vcpu>2</vcpu>
	I0318 22:40:01.772626   14794 main.go:141] libmachine: (addons-935788)   <features>
	I0318 22:40:01.772635   14794 main.go:141] libmachine: (addons-935788)     <acpi/>
	I0318 22:40:01.772645   14794 main.go:141] libmachine: (addons-935788)     <apic/>
	I0318 22:40:01.772651   14794 main.go:141] libmachine: (addons-935788)     <pae/>
	I0318 22:40:01.772656   14794 main.go:141] libmachine: (addons-935788)     
	I0318 22:40:01.772661   14794 main.go:141] libmachine: (addons-935788)   </features>
	I0318 22:40:01.772668   14794 main.go:141] libmachine: (addons-935788)   <cpu mode='host-passthrough'>
	I0318 22:40:01.772695   14794 main.go:141] libmachine: (addons-935788)   
	I0318 22:40:01.772724   14794 main.go:141] libmachine: (addons-935788)   </cpu>
	I0318 22:40:01.772739   14794 main.go:141] libmachine: (addons-935788)   <os>
	I0318 22:40:01.772754   14794 main.go:141] libmachine: (addons-935788)     <type>hvm</type>
	I0318 22:40:01.772776   14794 main.go:141] libmachine: (addons-935788)     <boot dev='cdrom'/>
	I0318 22:40:01.772796   14794 main.go:141] libmachine: (addons-935788)     <boot dev='hd'/>
	I0318 22:40:01.772810   14794 main.go:141] libmachine: (addons-935788)     <bootmenu enable='no'/>
	I0318 22:40:01.772821   14794 main.go:141] libmachine: (addons-935788)   </os>
	I0318 22:40:01.772831   14794 main.go:141] libmachine: (addons-935788)   <devices>
	I0318 22:40:01.772843   14794 main.go:141] libmachine: (addons-935788)     <disk type='file' device='cdrom'>
	I0318 22:40:01.772861   14794 main.go:141] libmachine: (addons-935788)       <source file='/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/boot2docker.iso'/>
	I0318 22:40:01.772877   14794 main.go:141] libmachine: (addons-935788)       <target dev='hdc' bus='scsi'/>
	I0318 22:40:01.772892   14794 main.go:141] libmachine: (addons-935788)       <readonly/>
	I0318 22:40:01.772903   14794 main.go:141] libmachine: (addons-935788)     </disk>
	I0318 22:40:01.772918   14794 main.go:141] libmachine: (addons-935788)     <disk type='file' device='disk'>
	I0318 22:40:01.772931   14794 main.go:141] libmachine: (addons-935788)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 22:40:01.772954   14794 main.go:141] libmachine: (addons-935788)       <source file='/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/addons-935788.rawdisk'/>
	I0318 22:40:01.772971   14794 main.go:141] libmachine: (addons-935788)       <target dev='hda' bus='virtio'/>
	I0318 22:40:01.772983   14794 main.go:141] libmachine: (addons-935788)     </disk>
	I0318 22:40:01.772995   14794 main.go:141] libmachine: (addons-935788)     <interface type='network'>
	I0318 22:40:01.773009   14794 main.go:141] libmachine: (addons-935788)       <source network='mk-addons-935788'/>
	I0318 22:40:01.773020   14794 main.go:141] libmachine: (addons-935788)       <model type='virtio'/>
	I0318 22:40:01.773032   14794 main.go:141] libmachine: (addons-935788)     </interface>
	I0318 22:40:01.773049   14794 main.go:141] libmachine: (addons-935788)     <interface type='network'>
	I0318 22:40:01.773065   14794 main.go:141] libmachine: (addons-935788)       <source network='default'/>
	I0318 22:40:01.773076   14794 main.go:141] libmachine: (addons-935788)       <model type='virtio'/>
	I0318 22:40:01.773087   14794 main.go:141] libmachine: (addons-935788)     </interface>
	I0318 22:40:01.773098   14794 main.go:141] libmachine: (addons-935788)     <serial type='pty'>
	I0318 22:40:01.773107   14794 main.go:141] libmachine: (addons-935788)       <target port='0'/>
	I0318 22:40:01.773122   14794 main.go:141] libmachine: (addons-935788)     </serial>
	I0318 22:40:01.773135   14794 main.go:141] libmachine: (addons-935788)     <console type='pty'>
	I0318 22:40:01.773147   14794 main.go:141] libmachine: (addons-935788)       <target type='serial' port='0'/>
	I0318 22:40:01.773160   14794 main.go:141] libmachine: (addons-935788)     </console>
	I0318 22:40:01.773172   14794 main.go:141] libmachine: (addons-935788)     <rng model='virtio'>
	I0318 22:40:01.773183   14794 main.go:141] libmachine: (addons-935788)       <backend model='random'>/dev/random</backend>
	I0318 22:40:01.773198   14794 main.go:141] libmachine: (addons-935788)     </rng>
	I0318 22:40:01.773210   14794 main.go:141] libmachine: (addons-935788)     
	I0318 22:40:01.773220   14794 main.go:141] libmachine: (addons-935788)     
	I0318 22:40:01.773229   14794 main.go:141] libmachine: (addons-935788)   </devices>
	I0318 22:40:01.773240   14794 main.go:141] libmachine: (addons-935788) </domain>
	I0318 22:40:01.773251   14794 main.go:141] libmachine: (addons-935788) 
	I0318 22:40:01.778618   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:52:cd:10 in network default
	I0318 22:40:01.779112   14794 main.go:141] libmachine: (addons-935788) Ensuring networks are active...
	I0318 22:40:01.779130   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:01.779695   14794 main.go:141] libmachine: (addons-935788) Ensuring network default is active
	I0318 22:40:01.780033   14794 main.go:141] libmachine: (addons-935788) Ensuring network mk-addons-935788 is active
	I0318 22:40:01.780533   14794 main.go:141] libmachine: (addons-935788) Getting domain xml...
	I0318 22:40:01.781136   14794 main.go:141] libmachine: (addons-935788) Creating domain...
	I0318 22:40:03.111936   14794 main.go:141] libmachine: (addons-935788) Waiting to get IP...
	I0318 22:40:03.112650   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:03.113034   14794 main.go:141] libmachine: (addons-935788) DBG | unable to find current IP address of domain addons-935788 in network mk-addons-935788
	I0318 22:40:03.113058   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:03.113002   14816 retry.go:31] will retry after 294.461498ms: waiting for machine to come up
	I0318 22:40:03.409348   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:03.409744   14794 main.go:141] libmachine: (addons-935788) DBG | unable to find current IP address of domain addons-935788 in network mk-addons-935788
	I0318 22:40:03.409773   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:03.409694   14816 retry.go:31] will retry after 247.065025ms: waiting for machine to come up
	I0318 22:40:03.658017   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:03.658390   14794 main.go:141] libmachine: (addons-935788) DBG | unable to find current IP address of domain addons-935788 in network mk-addons-935788
	I0318 22:40:03.658410   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:03.658350   14816 retry.go:31] will retry after 383.524476ms: waiting for machine to come up
	I0318 22:40:04.043795   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:04.044227   14794 main.go:141] libmachine: (addons-935788) DBG | unable to find current IP address of domain addons-935788 in network mk-addons-935788
	I0318 22:40:04.044257   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:04.044181   14816 retry.go:31] will retry after 577.116193ms: waiting for machine to come up
	I0318 22:40:04.622405   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:04.622819   14794 main.go:141] libmachine: (addons-935788) DBG | unable to find current IP address of domain addons-935788 in network mk-addons-935788
	I0318 22:40:04.622846   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:04.622764   14816 retry.go:31] will retry after 703.354882ms: waiting for machine to come up
	I0318 22:40:05.327467   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:05.327930   14794 main.go:141] libmachine: (addons-935788) DBG | unable to find current IP address of domain addons-935788 in network mk-addons-935788
	I0318 22:40:05.327961   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:05.327862   14816 retry.go:31] will retry after 762.95038ms: waiting for machine to come up
	I0318 22:40:06.092667   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:06.093019   14794 main.go:141] libmachine: (addons-935788) DBG | unable to find current IP address of domain addons-935788 in network mk-addons-935788
	I0318 22:40:06.093048   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:06.092970   14816 retry.go:31] will retry after 868.973529ms: waiting for machine to come up
	I0318 22:40:06.963096   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:06.963484   14794 main.go:141] libmachine: (addons-935788) DBG | unable to find current IP address of domain addons-935788 in network mk-addons-935788
	I0318 22:40:06.963505   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:06.963438   14816 retry.go:31] will retry after 1.36331837s: waiting for machine to come up
	I0318 22:40:08.328862   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:08.329168   14794 main.go:141] libmachine: (addons-935788) DBG | unable to find current IP address of domain addons-935788 in network mk-addons-935788
	I0318 22:40:08.329197   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:08.329116   14816 retry.go:31] will retry after 1.795758203s: waiting for machine to come up
	I0318 22:40:10.126421   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:10.126846   14794 main.go:141] libmachine: (addons-935788) DBG | unable to find current IP address of domain addons-935788 in network mk-addons-935788
	I0318 22:40:10.126877   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:10.126793   14816 retry.go:31] will retry after 1.449250041s: waiting for machine to come up
	I0318 22:40:11.577234   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:11.577698   14794 main.go:141] libmachine: (addons-935788) DBG | unable to find current IP address of domain addons-935788 in network mk-addons-935788
	I0318 22:40:11.577736   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:11.577669   14816 retry.go:31] will retry after 1.907613001s: waiting for machine to come up
	I0318 22:40:13.487716   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:13.488214   14794 main.go:141] libmachine: (addons-935788) DBG | unable to find current IP address of domain addons-935788 in network mk-addons-935788
	I0318 22:40:13.488237   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:13.488165   14816 retry.go:31] will retry after 2.321228548s: waiting for machine to come up
	I0318 22:40:15.810559   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:15.810993   14794 main.go:141] libmachine: (addons-935788) DBG | unable to find current IP address of domain addons-935788 in network mk-addons-935788
	I0318 22:40:15.811032   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:15.810951   14816 retry.go:31] will retry after 3.500798472s: waiting for machine to come up
	I0318 22:40:19.315857   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:19.316252   14794 main.go:141] libmachine: (addons-935788) DBG | unable to find current IP address of domain addons-935788 in network mk-addons-935788
	I0318 22:40:19.316280   14794 main.go:141] libmachine: (addons-935788) DBG | I0318 22:40:19.316220   14816 retry.go:31] will retry after 4.345528306s: waiting for machine to come up
	I0318 22:40:23.664293   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:23.664734   14794 main.go:141] libmachine: (addons-935788) Found IP for machine: 192.168.39.13
	I0318 22:40:23.664762   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has current primary IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:23.664772   14794 main.go:141] libmachine: (addons-935788) Reserving static IP address...
	I0318 22:40:23.665035   14794 main.go:141] libmachine: (addons-935788) DBG | unable to find host DHCP lease matching {name: "addons-935788", mac: "52:54:00:f2:69:18", ip: "192.168.39.13"} in network mk-addons-935788
	I0318 22:40:23.730911   14794 main.go:141] libmachine: (addons-935788) Reserved static IP address: 192.168.39.13
	I0318 22:40:23.730942   14794 main.go:141] libmachine: (addons-935788) DBG | Getting to WaitForSSH function...
	I0318 22:40:23.730950   14794 main.go:141] libmachine: (addons-935788) Waiting for SSH to be available...
	I0318 22:40:23.733121   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:23.733471   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:23.733506   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:23.733631   14794 main.go:141] libmachine: (addons-935788) DBG | Using SSH client type: external
	I0318 22:40:23.733652   14794 main.go:141] libmachine: (addons-935788) DBG | Using SSH private key: /home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa (-rw-------)
	I0318 22:40:23.733679   14794 main.go:141] libmachine: (addons-935788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 22:40:23.733693   14794 main.go:141] libmachine: (addons-935788) DBG | About to run SSH command:
	I0318 22:40:23.733708   14794 main.go:141] libmachine: (addons-935788) DBG | exit 0
	I0318 22:40:23.867920   14794 main.go:141] libmachine: (addons-935788) DBG | SSH cmd err, output: <nil>: 
	I0318 22:40:23.868150   14794 main.go:141] libmachine: (addons-935788) KVM machine creation complete!
	I0318 22:40:23.868444   14794 main.go:141] libmachine: (addons-935788) Calling .GetConfigRaw
	I0318 22:40:23.868995   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:23.869180   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:23.869326   14794 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 22:40:23.869343   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:23.870289   14794 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 22:40:23.870303   14794 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 22:40:23.870310   14794 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 22:40:23.870319   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:23.872218   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:23.872628   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:23.872657   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:23.872803   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:23.872981   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:23.873122   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:23.873262   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:23.873422   14794 main.go:141] libmachine: Using SSH client type: native
	I0318 22:40:23.873617   14794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0318 22:40:23.873632   14794 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 22:40:23.987474   14794 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 22:40:23.987497   14794 main.go:141] libmachine: Detecting the provisioner...
	I0318 22:40:23.987506   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:23.989980   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:23.990298   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:23.990328   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:23.990475   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:23.990626   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:23.990746   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:23.990849   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:23.990964   14794 main.go:141] libmachine: Using SSH client type: native
	I0318 22:40:23.991151   14794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0318 22:40:23.991163   14794 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 22:40:24.105235   14794 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 22:40:24.105319   14794 main.go:141] libmachine: found compatible host: buildroot
	I0318 22:40:24.105336   14794 main.go:141] libmachine: Provisioning with buildroot...
	I0318 22:40:24.105346   14794 main.go:141] libmachine: (addons-935788) Calling .GetMachineName
	I0318 22:40:24.105585   14794 buildroot.go:166] provisioning hostname "addons-935788"
	I0318 22:40:24.105607   14794 main.go:141] libmachine: (addons-935788) Calling .GetMachineName
	I0318 22:40:24.105775   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:24.107895   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.108222   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:24.108251   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.108352   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:24.108536   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:24.108701   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:24.108849   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:24.109027   14794 main.go:141] libmachine: Using SSH client type: native
	I0318 22:40:24.109208   14794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0318 22:40:24.109221   14794 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-935788 && echo "addons-935788" | sudo tee /etc/hostname
	I0318 22:40:24.235037   14794 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-935788
	
	I0318 22:40:24.235062   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:24.237357   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.237716   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:24.237743   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.237901   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:24.238072   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:24.238202   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:24.238345   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:24.238482   14794 main.go:141] libmachine: Using SSH client type: native
	I0318 22:40:24.238676   14794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0318 22:40:24.238693   14794 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-935788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-935788/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-935788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 22:40:24.361580   14794 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 22:40:24.361611   14794 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17786-6465/.minikube CaCertPath:/home/jenkins/minikube-integration/17786-6465/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17786-6465/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17786-6465/.minikube}
	I0318 22:40:24.361631   14794 buildroot.go:174] setting up certificates
	I0318 22:40:24.361643   14794 provision.go:84] configureAuth start
	I0318 22:40:24.361652   14794 main.go:141] libmachine: (addons-935788) Calling .GetMachineName
	I0318 22:40:24.361861   14794 main.go:141] libmachine: (addons-935788) Calling .GetIP
	I0318 22:40:24.364402   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.364729   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:24.364760   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.364875   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:24.367784   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.368137   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:24.368163   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.368435   14794 provision.go:143] copyHostCerts
	I0318 22:40:24.368498   14794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17786-6465/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17786-6465/.minikube/ca.pem (1082 bytes)
	I0318 22:40:24.368593   14794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17786-6465/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17786-6465/.minikube/cert.pem (1123 bytes)
	I0318 22:40:24.368647   14794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17786-6465/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17786-6465/.minikube/key.pem (1679 bytes)
	I0318 22:40:24.368691   14794 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17786-6465/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17786-6465/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17786-6465/.minikube/certs/ca-key.pem org=jenkins.addons-935788 san=[127.0.0.1 192.168.39.13 addons-935788 localhost minikube]
	I0318 22:40:24.466840   14794 provision.go:177] copyRemoteCerts
	I0318 22:40:24.466883   14794 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 22:40:24.466901   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:24.469029   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.469281   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:24.469307   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.469455   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:24.469587   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:24.469703   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:24.469817   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:24.559449   14794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17786-6465/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0318 22:40:24.584576   14794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17786-6465/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 22:40:24.609101   14794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17786-6465/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 22:40:24.635951   14794 provision.go:87] duration metric: took 274.300489ms to configureAuth
	I0318 22:40:24.635969   14794 buildroot.go:189] setting minikube options for container-runtime
	I0318 22:40:24.636131   14794 config.go:182] Loaded profile config "addons-935788": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0318 22:40:24.636156   14794 main.go:141] libmachine: Checking connection to Docker...
	I0318 22:40:24.636172   14794 main.go:141] libmachine: (addons-935788) Calling .GetURL
	I0318 22:40:24.637300   14794 main.go:141] libmachine: (addons-935788) DBG | Using libvirt version 6000000
	I0318 22:40:24.639263   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.639579   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:24.639612   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.639775   14794 main.go:141] libmachine: Docker is up and running!
	I0318 22:40:24.639787   14794 main.go:141] libmachine: Reticulating splines...
	I0318 22:40:24.639795   14794 client.go:171] duration metric: took 23.544421748s to LocalClient.Create
	I0318 22:40:24.639825   14794 start.go:167] duration metric: took 23.544488278s to libmachine.API.Create "addons-935788"
	I0318 22:40:24.639840   14794 start.go:293] postStartSetup for "addons-935788" (driver="kvm2")
	I0318 22:40:24.639854   14794 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 22:40:24.639869   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:24.640076   14794 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 22:40:24.640096   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:24.641846   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.642136   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:24.642164   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.642279   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:24.642486   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:24.642626   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:24.642772   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:24.731234   14794 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 22:40:24.735706   14794 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 22:40:24.735723   14794 filesync.go:126] Scanning /home/jenkins/minikube-integration/17786-6465/.minikube/addons for local assets ...
	I0318 22:40:24.735781   14794 filesync.go:126] Scanning /home/jenkins/minikube-integration/17786-6465/.minikube/files for local assets ...
	I0318 22:40:24.735804   14794 start.go:296] duration metric: took 95.95505ms for postStartSetup
	I0318 22:40:24.735829   14794 main.go:141] libmachine: (addons-935788) Calling .GetConfigRaw
	I0318 22:40:24.736306   14794 main.go:141] libmachine: (addons-935788) Calling .GetIP
	I0318 22:40:24.738945   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.739235   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:24.739267   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.739436   14794 profile.go:142] Saving config to /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/config.json ...
	I0318 22:40:24.739623   14794 start.go:128] duration metric: took 23.659621597s to createHost
	I0318 22:40:24.739647   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:24.741997   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.742294   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:24.742321   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.742471   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:24.742681   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:24.742834   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:24.742955   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:24.743099   14794 main.go:141] libmachine: Using SSH client type: native
	I0318 22:40:24.743255   14794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.13 22 <nil> <nil>}
	I0318 22:40:24.743265   14794 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 22:40:24.857056   14794 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710801624.825354279
	
	I0318 22:40:24.857071   14794 fix.go:216] guest clock: 1710801624.825354279
	I0318 22:40:24.857077   14794 fix.go:229] Guest: 2024-03-18 22:40:24.825354279 +0000 UTC Remote: 2024-03-18 22:40:24.739635848 +0000 UTC m=+23.771567883 (delta=85.718431ms)
	I0318 22:40:24.857094   14794 fix.go:200] guest clock delta is within tolerance: 85.718431ms
	I0318 22:40:24.857099   14794 start.go:83] releasing machines lock for "addons-935788", held for 23.777160993s
	I0318 22:40:24.857114   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:24.857334   14794 main.go:141] libmachine: (addons-935788) Calling .GetIP
	I0318 22:40:24.859502   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.859850   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:24.859876   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.860010   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:24.860509   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:24.860659   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:24.860733   14794 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 22:40:24.860789   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:24.860858   14794 ssh_runner.go:195] Run: cat /version.json
	I0318 22:40:24.860885   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:24.863099   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.863320   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.863380   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:24.863400   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.863567   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:24.863704   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:24.863808   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:24.863828   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:24.863842   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:24.863969   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:24.863980   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:24.864124   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:24.864272   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:24.864422   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:24.955083   14794 ssh_runner.go:195] Run: systemctl --version
	I0318 22:40:24.984330   14794 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 22:40:24.990399   14794 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 22:40:24.990445   14794 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 22:40:25.007429   14794 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 22:40:25.007443   14794 start.go:494] detecting cgroup driver to use...
	I0318 22:40:25.007486   14794 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 22:40:25.037694   14794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 22:40:25.051029   14794 docker.go:217] disabling cri-docker service (if available) ...
	I0318 22:40:25.051064   14794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 22:40:25.064455   14794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 22:40:25.077428   14794 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 22:40:25.188221   14794 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 22:40:25.340922   14794 docker.go:233] disabling docker service ...
	I0318 22:40:25.340982   14794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 22:40:25.357021   14794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 22:40:25.371041   14794 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 22:40:25.503541   14794 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 22:40:25.617581   14794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 22:40:25.631820   14794 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 22:40:25.651369   14794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 22:40:25.663022   14794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 22:40:25.674507   14794 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 22:40:25.674545   14794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 22:40:25.686134   14794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 22:40:25.697587   14794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 22:40:25.709041   14794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 22:40:25.720468   14794 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 22:40:25.731973   14794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 22:40:25.743540   14794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0318 22:40:25.755175   14794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0318 22:40:25.766712   14794 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 22:40:25.777014   14794 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 22:40:25.777043   14794 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 22:40:25.791550   14794 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 22:40:25.801803   14794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:40:25.930561   14794 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 22:40:25.960597   14794 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0318 22:40:25.960682   14794 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0318 22:40:25.966041   14794 retry.go:31] will retry after 996.840828ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0318 22:40:26.963648   14794 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0318 22:40:26.969593   14794 start.go:562] Will wait 60s for crictl version
	I0318 22:40:26.969664   14794 ssh_runner.go:195] Run: which crictl
	I0318 22:40:26.973960   14794 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 22:40:27.009067   14794 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.14
	RuntimeApiVersion:  v1
	I0318 22:40:27.009146   14794 ssh_runner.go:195] Run: containerd --version
	I0318 22:40:27.037104   14794 ssh_runner.go:195] Run: containerd --version
	I0318 22:40:27.067023   14794 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.7.14 ...
	I0318 22:40:27.068232   14794 main.go:141] libmachine: (addons-935788) Calling .GetIP
	I0318 22:40:27.070618   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:27.070981   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:27.071008   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:27.071206   14794 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 22:40:27.075572   14794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 22:40:27.089776   14794 kubeadm.go:877] updating cluster {Name:addons-935788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-935788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 22:40:27.089906   14794 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0318 22:40:27.089963   14794 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 22:40:27.124467   14794 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0318 22:40:27.124527   14794 ssh_runner.go:195] Run: which lz4
	I0318 22:40:27.128694   14794 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 22:40:27.133182   14794 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 22:40:27.133206   14794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17786-6465/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (402346652 bytes)
	I0318 22:40:28.590563   14794 containerd.go:563] duration metric: took 1.461901695s to copy over tarball
	I0318 22:40:28.590629   14794 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 22:40:31.107254   14794 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.5166038s)
	I0318 22:40:31.107281   14794 containerd.go:570] duration metric: took 2.516694354s to extract the tarball
	I0318 22:40:31.107295   14794 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 22:40:31.148144   14794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:40:31.267818   14794 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 22:40:31.291092   14794 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 22:40:31.340260   14794 retry.go:31] will retry after 289.119181ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-18T22:40:31Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0318 22:40:31.629700   14794 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 22:40:31.676372   14794 containerd.go:627] all images are preloaded for containerd runtime.
	I0318 22:40:31.676411   14794 cache_images.go:84] Images are preloaded, skipping loading
	I0318 22:40:31.676419   14794 kubeadm.go:928] updating node { 192.168.39.13 8443 v1.29.3 containerd true true} ...
	I0318 22:40:31.676524   14794 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-935788 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-935788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 22:40:31.676590   14794 ssh_runner.go:195] Run: sudo crictl info
	I0318 22:40:31.714787   14794 cni.go:84] Creating CNI manager for ""
	I0318 22:40:31.714810   14794 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0318 22:40:31.714820   14794 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 22:40:31.714838   14794 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.13 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-935788 NodeName:addons-935788 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 22:40:31.714943   14794 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-935788"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 22:40:31.714997   14794 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0318 22:40:31.726551   14794 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 22:40:31.726599   14794 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 22:40:31.737518   14794 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0318 22:40:31.755542   14794 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 22:40:31.773503   14794 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2170 bytes)
	I0318 22:40:31.792130   14794 ssh_runner.go:195] Run: grep 192.168.39.13	control-plane.minikube.internal$ /etc/hosts
	I0318 22:40:31.796301   14794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 22:40:31.809753   14794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:40:31.928176   14794 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:40:31.950780   14794 certs.go:68] Setting up /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788 for IP: 192.168.39.13
	I0318 22:40:31.950805   14794 certs.go:194] generating shared ca certs ...
	I0318 22:40:31.950825   14794 certs.go:226] acquiring lock for ca certs: {Name:mk494c5ab68cbda5e4ad4e6c5f88f0bc4c163214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:40:31.950974   14794 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/17786-6465/.minikube/ca.key
	I0318 22:40:32.060603   14794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17786-6465/.minikube/ca.crt ...
	I0318 22:40:32.060630   14794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17786-6465/.minikube/ca.crt: {Name:mke52d738f44026807f5dd637c852c1547445699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:40:32.060806   14794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17786-6465/.minikube/ca.key ...
	I0318 22:40:32.060821   14794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17786-6465/.minikube/ca.key: {Name:mk3f39b42f5833753b2ad9fecc3a59a61d01bb49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:40:32.060911   14794 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17786-6465/.minikube/proxy-client-ca.key
	I0318 22:40:32.281252   14794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17786-6465/.minikube/proxy-client-ca.crt ...
	I0318 22:40:32.281284   14794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17786-6465/.minikube/proxy-client-ca.crt: {Name:mk2accacc8e0644ab78253c4284d7b895aa2030a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:40:32.281448   14794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17786-6465/.minikube/proxy-client-ca.key ...
	I0318 22:40:32.281462   14794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17786-6465/.minikube/proxy-client-ca.key: {Name:mk85be071d44d6e6f99fda7205b3a7f2a8e5eccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:40:32.281549   14794 certs.go:256] generating profile certs ...
	I0318 22:40:32.281614   14794 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.key
	I0318 22:40:32.281633   14794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt with IP's: []
	I0318 22:40:32.461524   14794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt ...
	I0318 22:40:32.461553   14794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: {Name:mk9cc4d80a30c430cc04eeddf35cedf00f8c1d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:40:32.461711   14794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.key ...
	I0318 22:40:32.461725   14794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.key: {Name:mkd3cfa7d8dc000bd01491bb0589adcf03f95038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:40:32.461827   14794 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/apiserver.key.4acfc1c0
	I0318 22:40:32.461850   14794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/apiserver.crt.4acfc1c0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.13]
	I0318 22:40:32.614110   14794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/apiserver.crt.4acfc1c0 ...
	I0318 22:40:32.614142   14794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/apiserver.crt.4acfc1c0: {Name:mkcd8c1bf27d8270f3cf8470e5d6b075148f0f62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:40:32.614304   14794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/apiserver.key.4acfc1c0 ...
	I0318 22:40:32.614322   14794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/apiserver.key.4acfc1c0: {Name:mk7a0710e8653ae385b8ba909c762a1af19474a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:40:32.614412   14794 certs.go:381] copying /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/apiserver.crt.4acfc1c0 -> /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/apiserver.crt
	I0318 22:40:32.614511   14794 certs.go:385] copying /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/apiserver.key.4acfc1c0 -> /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/apiserver.key
	I0318 22:40:32.614593   14794 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/proxy-client.key
	I0318 22:40:32.614630   14794 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/proxy-client.crt with IP's: []
	I0318 22:40:32.912868   14794 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/proxy-client.crt ...
	I0318 22:40:32.912899   14794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/proxy-client.crt: {Name:mkad950d4dd9a2b3062a8813fb8568e0ea1d8e69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:40:32.913083   14794 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/proxy-client.key ...
	I0318 22:40:32.913099   14794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/proxy-client.key: {Name:mkee88536c2f1c5b3839684491d6d647214c3496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:40:32.913311   14794 certs.go:484] found cert: /home/jenkins/minikube-integration/17786-6465/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 22:40:32.913352   14794 certs.go:484] found cert: /home/jenkins/minikube-integration/17786-6465/.minikube/certs/ca.pem (1082 bytes)
	I0318 22:40:32.913387   14794 certs.go:484] found cert: /home/jenkins/minikube-integration/17786-6465/.minikube/certs/cert.pem (1123 bytes)
	I0318 22:40:32.913418   14794 certs.go:484] found cert: /home/jenkins/minikube-integration/17786-6465/.minikube/certs/key.pem (1679 bytes)
	I0318 22:40:32.914032   14794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17786-6465/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 22:40:32.940651   14794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17786-6465/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 22:40:32.965933   14794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17786-6465/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 22:40:32.990784   14794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17786-6465/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0318 22:40:33.015555   14794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0318 22:40:33.040359   14794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 22:40:33.065159   14794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 22:40:33.090125   14794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 22:40:33.115140   14794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17786-6465/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 22:40:33.139906   14794 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 22:40:33.157467   14794 ssh_runner.go:195] Run: openssl version
	I0318 22:40:33.163454   14794 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 22:40:33.175237   14794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 22:40:33.180081   14794 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 22:40 /usr/share/ca-certificates/minikubeCA.pem
	I0318 22:40:33.180128   14794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 22:40:33.185949   14794 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 22:40:33.197657   14794 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 22:40:33.202154   14794 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 22:40:33.202204   14794 kubeadm.go:391] StartCluster: {Name:addons-935788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-935788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 22:40:33.202305   14794 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0318 22:40:33.202368   14794 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 22:40:33.241445   14794 cri.go:89] found id: ""
	I0318 22:40:33.241505   14794 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 22:40:33.252440   14794 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:40:33.263218   14794 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:40:33.273516   14794 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:40:33.273533   14794 kubeadm.go:156] found existing configuration files:
	
	I0318 22:40:33.273563   14794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:40:33.283213   14794 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:40:33.283249   14794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:40:33.293356   14794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:40:33.303113   14794 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:40:33.303145   14794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:40:33.313462   14794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:40:33.323134   14794 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:40:33.323186   14794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:40:33.333144   14794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:40:33.342862   14794 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:40:33.342907   14794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:40:33.352902   14794 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:40:33.543141   14794 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:40:43.486507   14794 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0318 22:40:43.486602   14794 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:40:43.486707   14794 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:40:43.486828   14794 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:40:43.486969   14794 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:40:43.487071   14794 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:40:43.488582   14794 out.go:204]   - Generating certificates and keys ...
	I0318 22:40:43.488665   14794 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:40:43.488739   14794 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:40:43.488820   14794 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 22:40:43.488888   14794 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 22:40:43.488946   14794 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 22:40:43.488995   14794 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 22:40:43.489068   14794 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 22:40:43.489193   14794 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-935788 localhost] and IPs [192.168.39.13 127.0.0.1 ::1]
	I0318 22:40:43.489240   14794 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 22:40:43.489361   14794 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-935788 localhost] and IPs [192.168.39.13 127.0.0.1 ::1]
	I0318 22:40:43.489458   14794 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 22:40:43.489549   14794 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 22:40:43.489619   14794 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 22:40:43.489706   14794 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:40:43.489791   14794 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:40:43.489872   14794 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0318 22:40:43.489958   14794 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:40:43.490055   14794 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:40:43.490125   14794 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:40:43.490213   14794 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:40:43.490312   14794 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:40:43.491626   14794 out.go:204]   - Booting up control plane ...
	I0318 22:40:43.491714   14794 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:40:43.491803   14794 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:40:43.491888   14794 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:40:43.492032   14794 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:40:43.492190   14794 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:40:43.492267   14794 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:40:43.492474   14794 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:40:43.492583   14794 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.504685 seconds
	I0318 22:40:43.492705   14794 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 22:40:43.492864   14794 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 22:40:43.492917   14794 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 22:40:43.493083   14794 kubeadm.go:309] [mark-control-plane] Marking the node addons-935788 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 22:40:43.493159   14794 kubeadm.go:309] [bootstrap-token] Using token: zxd3s9.9xkr3vvgp5th630f
	I0318 22:40:43.494600   14794 out.go:204]   - Configuring RBAC rules ...
	I0318 22:40:43.494711   14794 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 22:40:43.494795   14794 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 22:40:43.494958   14794 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 22:40:43.495126   14794 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 22:40:43.495301   14794 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 22:40:43.495406   14794 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 22:40:43.495570   14794 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 22:40:43.495648   14794 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 22:40:43.495694   14794 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 22:40:43.495704   14794 kubeadm.go:309] 
	I0318 22:40:43.495786   14794 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 22:40:43.495796   14794 kubeadm.go:309] 
	I0318 22:40:43.495896   14794 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 22:40:43.495906   14794 kubeadm.go:309] 
	I0318 22:40:43.495952   14794 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 22:40:43.496040   14794 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 22:40:43.496117   14794 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 22:40:43.496128   14794 kubeadm.go:309] 
	I0318 22:40:43.496215   14794 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 22:40:43.496223   14794 kubeadm.go:309] 
	I0318 22:40:43.496261   14794 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 22:40:43.496267   14794 kubeadm.go:309] 
	I0318 22:40:43.496308   14794 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 22:40:43.496373   14794 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 22:40:43.496480   14794 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 22:40:43.496490   14794 kubeadm.go:309] 
	I0318 22:40:43.496588   14794 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 22:40:43.496692   14794 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 22:40:43.496701   14794 kubeadm.go:309] 
	I0318 22:40:43.496808   14794 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zxd3s9.9xkr3vvgp5th630f \
	I0318 22:40:43.496928   14794 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e72ae7ca8d35d2ca2e25f6834a2fd39b17c059d482b964fe92a8a76d419f4544 \
	I0318 22:40:43.496960   14794 kubeadm.go:309] 	--control-plane 
	I0318 22:40:43.496974   14794 kubeadm.go:309] 
	I0318 22:40:43.497090   14794 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 22:40:43.497099   14794 kubeadm.go:309] 
	I0318 22:40:43.497180   14794 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zxd3s9.9xkr3vvgp5th630f \
	I0318 22:40:43.497324   14794 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e72ae7ca8d35d2ca2e25f6834a2fd39b17c059d482b964fe92a8a76d419f4544 
	I0318 22:40:43.497337   14794 cni.go:84] Creating CNI manager for ""
	I0318 22:40:43.497343   14794 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0318 22:40:43.498929   14794 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 22:40:43.500323   14794 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 22:40:43.515851   14794 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 22:40:43.546530   14794 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 22:40:43.546604   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:43.546626   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-935788 minikube.k8s.io/updated_at=2024_03_18T22_40_43_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a199844351973d00eb5dd1cc0bf4d2238e461f04 minikube.k8s.io/name=addons-935788 minikube.k8s.io/primary=true
	I0318 22:40:43.668490   14794 ops.go:34] apiserver oom_adj: -16
	I0318 22:40:43.737428   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:44.238225   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:44.738128   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:45.237532   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:45.738373   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:46.237482   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:46.738451   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:47.238432   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:47.737806   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:48.237504   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:48.738227   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:49.237465   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:49.737872   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:50.238329   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:50.737574   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:51.237541   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:51.737801   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:52.237675   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:52.737461   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:53.238477   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:53.738299   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:54.237803   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:54.738025   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:55.237860   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:55.737514   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:56.238486   14794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:40:56.357269   14794 kubeadm.go:1107] duration metric: took 12.810723449s to wait for elevateKubeSystemPrivileges
	W0318 22:40:56.357315   14794 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 22:40:56.357324   14794 kubeadm.go:393] duration metric: took 23.155124531s to StartCluster
	I0318 22:40:56.357342   14794 settings.go:142] acquiring lock: {Name:mk6c1098321ed076532e23e387d7a210321d8ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:40:56.357458   14794 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17786-6465/kubeconfig
	I0318 22:40:56.357878   14794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17786-6465/kubeconfig: {Name:mk903dd78f7d9909dc7d1568144bb5ea14e38c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:40:56.358076   14794 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 22:40:56.358097   14794 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.13 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0318 22:40:56.359898   14794 out.go:177] * Verifying Kubernetes components...
	I0318 22:40:56.358151   14794 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0318 22:40:56.358293   14794 config.go:182] Loaded profile config "addons-935788": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0318 22:40:56.361344   14794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:40:56.361363   14794 addons.go:69] Setting cloud-spanner=true in profile "addons-935788"
	I0318 22:40:56.361381   14794 addons.go:69] Setting yakd=true in profile "addons-935788"
	I0318 22:40:56.361408   14794 addons.go:69] Setting inspektor-gadget=true in profile "addons-935788"
	I0318 22:40:56.361419   14794 addons.go:69] Setting metrics-server=true in profile "addons-935788"
	I0318 22:40:56.361421   14794 addons.go:69] Setting gcp-auth=true in profile "addons-935788"
	I0318 22:40:56.361435   14794 addons.go:234] Setting addon metrics-server=true in "addons-935788"
	I0318 22:40:56.361444   14794 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-935788"
	I0318 22:40:56.361451   14794 addons.go:69] Setting helm-tiller=true in profile "addons-935788"
	I0318 22:40:56.361455   14794 addons.go:69] Setting storage-provisioner=true in profile "addons-935788"
	I0318 22:40:56.361474   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:40:56.361482   14794 addons.go:234] Setting addon storage-provisioner=true in "addons-935788"
	I0318 22:40:56.361484   14794 addons.go:69] Setting ingress=true in profile "addons-935788"
	I0318 22:40:56.361473   14794 addons.go:69] Setting default-storageclass=true in profile "addons-935788"
	I0318 22:40:56.361508   14794 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-935788"
	I0318 22:40:56.361511   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:40:56.361513   14794 addons.go:234] Setting addon ingress=true in "addons-935788"
	I0318 22:40:56.361528   14794 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-935788"
	I0318 22:40:56.361532   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:40:56.361554   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:40:56.361563   14794 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-935788"
	I0318 22:40:56.361560   14794 addons.go:69] Setting ingress-dns=true in profile "addons-935788"
	I0318 22:40:56.361595   14794 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-935788"
	I0318 22:40:56.361605   14794 addons.go:234] Setting addon ingress-dns=true in "addons-935788"
	I0318 22:40:56.361644   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:40:56.361739   14794 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-935788"
	I0318 22:40:56.361760   14794 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-935788"
	I0318 22:40:56.361784   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:40:56.361544   14794 addons.go:69] Setting volumesnapshots=true in profile "addons-935788"
	I0318 22:40:56.361951   14794 addons.go:234] Setting addon volumesnapshots=true in "addons-935788"
	I0318 22:40:56.361955   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.361436   14794 mustload.go:65] Loading cluster: addons-935788
	I0318 22:40:56.361975   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:40:56.361979   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.361993   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.362005   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.362019   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.362027   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.361475   14794 addons.go:234] Setting addon helm-tiller=true in "addons-935788"
	I0318 22:40:56.361410   14794 addons.go:234] Setting addon cloud-spanner=true in "addons-935788"
	I0318 22:40:56.362000   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.362093   14794 addons.go:69] Setting registry=true in profile "addons-935788"
	I0318 22:40:56.362098   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.362104   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.362113   14794 addons.go:234] Setting addon registry=true in "addons-935788"
	I0318 22:40:56.362115   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.361954   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.362167   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.362115   14794 config.go:182] Loaded profile config "addons-935788": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0318 22:40:56.361438   14794 addons.go:234] Setting addon inspektor-gadget=true in "addons-935788"
	I0318 22:40:56.362750   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:40:56.361410   14794 addons.go:234] Setting addon yakd=true in "addons-935788"
	I0318 22:40:56.362808   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:40:56.362285   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.362311   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:40:56.362328   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:40:56.363151   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.363154   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.363176   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.363184   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.363262   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.363281   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.363739   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.363788   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.363942   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.363985   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.362349   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:40:56.364579   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.364597   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.362455   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.372544   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.362883   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.382967   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46033
	I0318 22:40:56.383160   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38651
	I0318 22:40:56.383329   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.383555   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.383817   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.383844   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.384053   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.384072   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.384129   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35399
	I0318 22:40:56.384266   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.384410   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.384649   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.385162   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.385178   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.385388   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.385419   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.385569   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.385594   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.385825   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.391168   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43197
	I0318 22:40:56.392862   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.392903   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.395450   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41275
	I0318 22:40:56.395551   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0318 22:40:56.395618   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46391
	I0318 22:40:56.395682   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0318 22:40:56.395913   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.396069   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.396477   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.396501   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.396570   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.396748   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.396766   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.396800   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.396822   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.397319   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.397338   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.397356   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.397377   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.397458   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.397470   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.397539   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.397956   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.398102   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.398117   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.398641   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.398710   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.398754   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.399142   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.399172   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.404704   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.404729   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.404773   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.405100   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.405129   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.405245   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.405265   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.407184   14794 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-935788"
	I0318 22:40:56.407247   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:40:56.407637   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.407686   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.414910   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39163
	I0318 22:40:56.415141   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44337
	I0318 22:40:56.415804   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.416239   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.416255   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.416637   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.416798   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.417855   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.418630   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:56.419038   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.419061   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.419358   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.419874   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.419907   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.421789   14794 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0318 22:40:56.423226   14794 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0318 22:40:56.423245   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0318 22:40:56.423262   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:56.426581   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.427140   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:56.427162   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.427346   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:56.427502   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:56.427643   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:56.427775   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:56.431754   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33485
	I0318 22:40:56.432155   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.432788   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.432810   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.432819   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I0318 22:40:56.433192   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.433257   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.433756   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.433780   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.433799   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.433838   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.434100   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.434257   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.435846   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:56.437507   14794 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0318 22:40:56.438719   14794 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0318 22:40:56.438740   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0318 22:40:56.438030   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45879
	I0318 22:40:56.438762   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:56.438059   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41489
	I0318 22:40:56.439122   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.439193   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.439603   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.439624   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.439939   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.440469   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.440507   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.440764   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.440781   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.440840   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42527
	I0318 22:40:56.441238   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.441839   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.441879   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.442055   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.442227   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:56.442250   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.442458   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40771
	I0318 22:40:56.442494   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:56.442642   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:56.442776   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:56.442937   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:56.443301   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.443754   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.443779   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.444054   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.444202   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.445578   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:56.447298   14794 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0318 22:40:56.448471   14794 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 22:40:56.448487   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 22:40:56.448503   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:56.447778   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39625
	I0318 22:40:56.448137   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.449365   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.449789   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.449804   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.450112   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.450275   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.452108   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:56.452492   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.452507   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.452558   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34871
	I0318 22:40:56.452819   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38399
	I0318 22:40:56.452949   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.453214   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.453278   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.453709   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:56.453733   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.453811   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.453826   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.453842   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.453853   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.454208   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.454331   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:56.454754   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.454785   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.455009   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:56.455173   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:56.455400   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.455405   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:56.455562   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37323
	I0318 22:40:56.455685   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.455966   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.456429   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.456446   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.456782   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.458303   14794 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0318 22:40:56.457122   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.457395   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.459350   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33387
	I0318 22:40:56.461152   14794 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 22:40:56.460313   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:56.460577   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.461666   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37691
	I0318 22:40:56.461712   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:56.463056   14794 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 22:40:56.464848   14794 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0318 22:40:56.464867   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0318 22:40:56.464886   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:56.464894   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.466157   14794 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0318 22:40:56.463754   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.464202   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44109
	I0318 22:40:56.465098   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:40:56.465538   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.465609   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34971
	I0318 22:40:56.467805   14794 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0318 22:40:56.467885   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0318 22:40:56.467900   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:56.467848   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.467940   14794 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0318 22:40:56.469112   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:56.469168   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:56.469181   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.469239   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.469260   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.470426   14794 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0318 22:40:56.468339   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.468821   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.469360   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:56.469774   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.469805   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.470130   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.471579   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.472725   14794 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0318 22:40:56.471769   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.471797   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:56.471810   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.472189   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.472227   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.472529   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:56.473017   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.473721   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.474793   14794 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0318 22:40:56.473780   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.473784   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:56.474861   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.474005   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:56.474014   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.474046   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:56.474323   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.474478   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32835
	I0318 22:40:56.474998   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0318 22:40:56.476134   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.477321   14794 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0318 22:40:56.475924   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.475996   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:56.476019   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.475168   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:56.476535   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.477376   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.476600   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.476747   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:56.477965   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:56.478704   14794 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0318 22:40:56.479868   14794 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0318 22:40:56.478724   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.478047   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.478209   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.481007   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.481531   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.482209   14794 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0318 22:40:56.483816   14794 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0318 22:40:56.483834   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0318 22:40:56.483850   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:56.482145   14794 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0318 22:40:56.482175   14794 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0318 22:40:56.482444   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.482496   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.482890   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:56.486125   14794 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0318 22:40:56.486141   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0318 22:40:56.486156   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:56.485050   14794 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0318 22:40:56.486207   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0318 22:40:56.486225   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:56.487501   14794 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0318 22:40:56.485367   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.488892   14794 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0318 22:40:56.488904   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0318 22:40:56.488919   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:56.491241   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42399
	I0318 22:40:56.491608   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.491621   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.492158   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.492198   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:56.492218   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.492230   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.492375   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:56.492571   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:56.492686   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:56.492808   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:56.493086   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34401
	I0318 22:40:56.493424   14794 addons.go:234] Setting addon default-storageclass=true in "addons-935788"
	I0318 22:40:56.493458   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:40:56.493462   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.493817   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.493849   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.494169   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.494183   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.494573   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.494577   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.494576   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.494740   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:56.494833   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.495059   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.495474   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.496114   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:56.496136   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.496349   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:56.496419   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:56.496437   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.496592   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:56.496809   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:56.496864   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.496888   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:56.497394   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:56.497550   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:56.497684   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:56.497936   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I0318 22:40:56.498102   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:56.498188   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:56.498234   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:56.500893   14794 out.go:177]   - Using image docker.io/registry:2.8.3
	I0318 22:40:56.498459   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:56.498488   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.499030   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:56.499338   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46119
	I0318 22:40:56.503106   14794 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0318 22:40:56.502162   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:56.502420   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.502534   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.504426   14794 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0318 22:40:56.505410   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0318 22:40:56.505427   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:56.505442   14794 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 22:40:56.506643   14794 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:40:56.506660   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 22:40:56.504521   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:56.506675   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:56.505478   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.505802   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.506751   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.506837   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:56.507572   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.507761   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.508021   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.508293   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.508559   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.508862   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:56.508882   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.509081   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:56.509270   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:56.509439   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:56.509581   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:56.510773   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.510801   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:56.510987   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:56.512248   14794 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0318 22:40:56.511302   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:56.511495   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:56.513731   14794 out.go:177]   - Using image docker.io/busybox:stable
	I0318 22:40:56.513751   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.513810   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35321
	I0318 22:40:56.514858   14794 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0318 22:40:56.515920   14794 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0318 22:40:56.515931   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0318 22:40:56.515941   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:56.515015   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:56.515236   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.517296   14794 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0318 22:40:56.517305   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0318 22:40:56.517318   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:56.518360   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:56.518421   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.518551   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.518570   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.518555   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:56.518740   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:56.518793   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.518865   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:56.518960   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.519083   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:56.519296   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:56.519509   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:40:56.519689   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:40:56.519725   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:40:56.520793   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.521119   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:56.521141   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.521276   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:56.521434   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:56.521569   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:56.521695   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	W0318 22:40:56.547595   14794 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40684->192.168.39.13:22: read: connection reset by peer
	I0318 22:40:56.547622   14794 retry.go:31] will retry after 191.591118ms: ssh: handshake failed: read tcp 192.168.39.1:40684->192.168.39.13:22: read: connection reset by peer
	I0318 22:40:56.559076   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
	I0318 22:40:56.559365   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:40:56.559771   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:40:56.559794   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:40:56.560088   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:40:56.560257   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:40:56.561768   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:40:56.561975   14794 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 22:40:56.561988   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 22:40:56.562001   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:40:56.564234   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.564633   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:40:56.564662   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:40:56.564788   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:40:56.564937   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:40:56.565102   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:40:56.565237   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	W0318 22:40:56.566782   14794 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40696->192.168.39.13:22: read: connection reset by peer
	I0318 22:40:56.566801   14794 retry.go:31] will retry after 176.610281ms: ssh: handshake failed: read tcp 192.168.39.1:40696->192.168.39.13:22: read: connection reset by peer
	I0318 22:40:57.022988   14794 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0318 22:40:57.023005   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0318 22:40:57.124807   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0318 22:40:57.191852   14794 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0318 22:40:57.191873   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0318 22:40:57.195156   14794 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0318 22:40:57.195172   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0318 22:40:57.195604   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0318 22:40:57.262758   14794 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:40:57.262779   14794 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 22:40:57.376205   14794 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0318 22:40:57.376225   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0318 22:40:57.388862   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:40:57.421085   14794 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 22:40:57.421109   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0318 22:40:57.670533   14794 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0318 22:40:57.670559   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0318 22:40:57.683792   14794 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0318 22:40:57.683812   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0318 22:40:57.691485   14794 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0318 22:40:57.691503   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0318 22:40:57.733942   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0318 22:40:57.742464   14794 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0318 22:40:57.742484   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0318 22:40:57.748860   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0318 22:40:57.767820   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 22:40:57.771827   14794 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 22:40:57.771848   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 22:40:57.793665   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0318 22:40:57.797028   14794 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0318 22:40:57.797048   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0318 22:40:57.827673   14794 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0318 22:40:57.827693   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0318 22:40:57.991493   14794 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0318 22:40:57.991528   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0318 22:40:58.105723   14794 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0318 22:40:58.105750   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0318 22:40:58.125172   14794 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0318 22:40:58.125195   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0318 22:40:58.146627   14794 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0318 22:40:58.146645   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0318 22:40:58.204014   14794 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0318 22:40:58.204031   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0318 22:40:58.213743   14794 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0318 22:40:58.213767   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0318 22:40:58.223686   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0318 22:40:58.233690   14794 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:40:58.233708   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 22:40:58.255954   14794 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0318 22:40:58.255980   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0318 22:40:58.288076   14794 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0318 22:40:58.288097   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0318 22:40:58.337293   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0318 22:40:58.443113   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0318 22:40:58.446646   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:40:58.448031   14794 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0318 22:40:58.448052   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0318 22:40:58.475142   14794 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0318 22:40:58.475159   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0318 22:40:58.476438   14794 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0318 22:40:58.476453   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0318 22:40:58.704868   14794 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0318 22:40:58.704889   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0318 22:40:58.835523   14794 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 22:40:58.835543   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0318 22:40:58.842466   14794 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0318 22:40:58.842483   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0318 22:40:59.119889   14794 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0318 22:40:59.119916   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0318 22:40:59.120288   14794 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0318 22:40:59.120303   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0318 22:40:59.139599   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 22:40:59.281941   14794 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0318 22:40:59.281965   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0318 22:40:59.354897   14794 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0318 22:40:59.354922   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0318 22:40:59.532844   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0318 22:40:59.607641   14794 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0318 22:40:59.607662   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0318 22:40:59.776760   14794 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0318 22:40:59.776785   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0318 22:40:59.966856   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0318 22:41:00.779929   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.655086607s)
	I0318 22:41:00.779979   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:00.779989   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:00.780282   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:00.780418   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:00.780438   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:00.780458   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:00.780471   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:00.780866   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:00.780874   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:00.780893   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:01.423851   14794 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.161059942s)
	I0318 22:41:01.423906   14794 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.161099398s)
	I0318 22:41:01.423933   14794 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0318 22:41:01.423851   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.228224275s)
	I0318 22:41:01.424094   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:01.424109   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:01.424335   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:01.424345   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:01.424359   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:01.424368   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:01.424395   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:01.424586   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:01.424613   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:01.424625   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:01.424911   14794 node_ready.go:35] waiting up to 6m0s for node "addons-935788" to be "Ready" ...
	I0318 22:41:01.438154   14794 node_ready.go:49] node "addons-935788" has status "Ready":"True"
	I0318 22:41:01.438175   14794 node_ready.go:38] duration metric: took 13.24156ms for node "addons-935788" to be "Ready" ...
	I0318 22:41:01.438185   14794 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:41:01.467122   14794 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-bksbc" in "kube-system" namespace to be "Ready" ...
	I0318 22:41:01.974938   14794 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-935788" context rescaled to 1 replicas
	I0318 22:41:02.275923   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.88703449s)
	I0318 22:41:02.275976   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:02.275989   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:02.276265   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:02.276289   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:02.276298   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:02.276306   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:02.276597   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:02.276614   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:03.361668   14794 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0318 22:41:03.361706   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:41:03.364547   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:41:03.364961   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:41:03.364994   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:41:03.365125   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:41:03.365313   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:41:03.365478   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:41:03.365617   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:41:03.479322   14794 pod_ready.go:102] pod "coredns-76f75df574-bksbc" in "kube-system" namespace has status "Ready":"False"
	I0318 22:41:03.930547   14794 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0318 22:41:04.153899   14794 addons.go:234] Setting addon gcp-auth=true in "addons-935788"
	I0318 22:41:04.153971   14794 host.go:66] Checking if "addons-935788" exists ...
	I0318 22:41:04.154365   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:41:04.154398   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:41:04.169690   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43651
	I0318 22:41:04.170102   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:41:04.170535   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:41:04.170560   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:41:04.170817   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:41:04.171303   14794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:41:04.171330   14794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:41:04.185403   14794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
	I0318 22:41:04.185732   14794 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:41:04.186148   14794 main.go:141] libmachine: Using API Version  1
	I0318 22:41:04.186170   14794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:41:04.186478   14794 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:41:04.186699   14794 main.go:141] libmachine: (addons-935788) Calling .GetState
	I0318 22:41:04.188169   14794 main.go:141] libmachine: (addons-935788) Calling .DriverName
	I0318 22:41:04.188390   14794 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0318 22:41:04.188416   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHHostname
	I0318 22:41:04.191140   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:41:04.191537   14794 main.go:141] libmachine: (addons-935788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:69:18", ip: ""} in network mk-addons-935788: {Iface:virbr1 ExpiryTime:2024-03-18 23:40:17 +0000 UTC Type:0 Mac:52:54:00:f2:69:18 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:addons-935788 Clientid:01:52:54:00:f2:69:18}
	I0318 22:41:04.191569   14794 main.go:141] libmachine: (addons-935788) DBG | domain addons-935788 has defined IP address 192.168.39.13 and MAC address 52:54:00:f2:69:18 in network mk-addons-935788
	I0318 22:41:04.191723   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHPort
	I0318 22:41:04.191890   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHKeyPath
	I0318 22:41:04.192048   14794 main.go:141] libmachine: (addons-935788) Calling .GetSSHUsername
	I0318 22:41:04.192219   14794 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/addons-935788/id_rsa Username:docker}
	I0318 22:41:05.506649   14794 pod_ready.go:102] pod "coredns-76f75df574-bksbc" in "kube-system" namespace has status "Ready":"False"
	I0318 22:41:05.667531   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.91863272s)
	I0318 22:41:05.667568   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.899716738s)
	I0318 22:41:05.667584   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.667598   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.667609   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.667628   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.667629   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.873939746s)
	I0318 22:41:05.667661   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.667677   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.667709   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.443994476s)
	I0318 22:41:05.667729   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.667741   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.667757   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.330438234s)
	I0318 22:41:05.667770   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.933787973s)
	I0318 22:41:05.667803   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.224660757s)
	I0318 22:41:05.667819   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.667828   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.667836   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.667845   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.667773   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.667894   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.667956   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.221286194s)
	I0318 22:41:05.667974   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.667983   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.668108   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.52847653s)
	W0318 22:41:05.668134   14794 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0318 22:41:05.668153   14794 retry.go:31] will retry after 301.62797ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0318 22:41:05.668177   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.135302514s)
	I0318 22:41:05.668196   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.668205   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.670023   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.670040   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.670039   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.670048   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.670063   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.670065   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.670086   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.670095   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.670098   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.670098   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.670104   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.670108   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.670112   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.670116   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.670120   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.670127   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.670131   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.670139   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.670146   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.670152   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.670112   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.670166   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.670169   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.670188   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.670194   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.670201   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.670207   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.670249   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.670264   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.670284   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.670291   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.670298   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.670305   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.670353   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.670360   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.670367   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.670373   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.670088   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.670412   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.670420   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.670425   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.671118   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.671125   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.671133   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.671142   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.671150   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.671151   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.671214   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.671222   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.672737   14794 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-935788 service yakd-dashboard -n yakd-dashboard
	
	I0318 22:41:05.671584   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.671608   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.671626   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.671642   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.671693   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.671708   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.671732   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.671846   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.671862   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.671878   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.671893   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.671907   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.671938   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.674060   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.674075   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.674084   14794 addons.go:470] Verifying addon registry=true in "addons-935788"
	I0318 22:41:05.675514   14794 out.go:177] * Verifying registry addon...
	I0318 22:41:05.674294   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.674303   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.674313   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.674322   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.674359   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.674448   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.674457   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.675551   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.675568   14794 addons.go:470] Verifying addon ingress=true in "addons-935788"
	I0318 22:41:05.676887   14794 out.go:177] * Verifying ingress addon...
	I0318 22:41:05.675634   14794 addons.go:470] Verifying addon metrics-server=true in "addons-935788"
	I0318 22:41:05.678641   14794 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0318 22:41:05.679655   14794 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0318 22:41:05.702771   14794 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0318 22:41:05.702790   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:05.712189   14794 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0318 22:41:05.712209   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:05.742969   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.742989   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.743221   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.743252   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.743264   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	W0318 22:41:05.743389   14794 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0318 22:41:05.748515   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:05.748531   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:05.748771   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:05.748792   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:05.748792   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:05.970099   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 22:41:06.190436   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:06.194786   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:06.628208   14794 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.439795291s)
	I0318 22:41:06.629709   14794 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 22:41:06.631262   14794 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0318 22:41:06.629041   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.66213415s)
	I0318 22:41:06.632605   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:06.632622   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:06.632636   14794 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0318 22:41:06.632653   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0318 22:41:06.632889   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:06.632928   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:06.632941   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:06.632950   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:06.632896   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:06.633258   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:06.633273   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:06.633283   14794 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-935788"
	I0318 22:41:06.634568   14794 out.go:177] * Verifying csi-hostpath-driver addon...
	I0318 22:41:06.636448   14794 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0318 22:41:06.661768   14794 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0318 22:41:06.661793   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:06.705909   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:06.712491   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:06.764851   14794 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0318 22:41:06.764873   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0318 22:41:06.854358   14794 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0318 22:41:06.854384   14794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0318 22:41:06.955532   14794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0318 22:41:07.148045   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:07.184785   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:07.184786   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:07.644839   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:07.685537   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:07.686498   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:07.972989   14794 pod_ready.go:102] pod "coredns-76f75df574-bksbc" in "kube-system" namespace has status "Ready":"False"
	I0318 22:41:08.078782   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.108638927s)
	I0318 22:41:08.078839   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:08.078853   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:08.079208   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:08.079248   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:08.079269   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:08.079276   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:08.079360   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:08.079552   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:08.079577   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:08.142056   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:08.187100   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:08.195696   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:08.427683   14794 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.472111366s)
	I0318 22:41:08.427726   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:08.427736   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:08.428005   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:08.428013   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:08.428029   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:08.428038   14794 main.go:141] libmachine: Making call to close driver server
	I0318 22:41:08.428047   14794 main.go:141] libmachine: (addons-935788) Calling .Close
	I0318 22:41:08.428297   14794 main.go:141] libmachine: (addons-935788) DBG | Closing plugin on server side
	I0318 22:41:08.428328   14794 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:41:08.428339   14794 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:41:08.429658   14794 addons.go:470] Verifying addon gcp-auth=true in "addons-935788"
	I0318 22:41:08.431200   14794 out.go:177] * Verifying gcp-auth addon...
	I0318 22:41:08.433337   14794 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0318 22:41:08.452861   14794 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0318 22:41:08.452876   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:08.664429   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:08.748843   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:08.753211   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:08.937984   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:09.142747   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:09.187627   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:09.187982   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:09.437567   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:09.472904   14794 pod_ready.go:97] pod "coredns-76f75df574-bksbc" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 22:41:09 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 22:40:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 22:40:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 22:40:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 22:40:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.13 HostIPs:[{IP:192.168.39.13}] PodIP: PodIPs:[] StartTime:2024-03-18 22:40:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-18 22:40:58 +0000 UTC,FinishedAt:2024-03-18 22:41:08 +0000 UTC,ContainerID:containerd://d5ef64aef4ae5e5333d7c883cb31a843199f68b5b7615e5cb0f6066ac7d79531,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:containerd://d5ef64aef4ae5e5333d7c883cb31a843199f68b5b7615e5cb0f6066ac7d79531 Started:0xc0030138c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0318 22:41:09.472930   14794 pod_ready.go:81] duration metric: took 8.005785372s for pod "coredns-76f75df574-bksbc" in "kube-system" namespace to be "Ready" ...
	E0318 22:41:09.472939   14794 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-76f75df574-bksbc" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 22:41:09 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 22:40:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 22:40:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 22:40:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-18 22:40:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.13 HostIPs:[{IP:192.168.39.13}] PodIP: PodIPs:[] StartTime:2024-03-18 22:40:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-18 22:40:58 +0000 UTC,FinishedAt:2024-03-18 22:41:08 +0000 UTC,ContainerID:containerd://d5ef64aef4ae5e5333d7c883cb31a843199f68b5b7615e5cb0f6066ac7d79531,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:containerd://d5ef64aef4ae5e5333d7c883cb31a843199f68b5b7615e5cb0f6066ac7d79531 Started:0xc0030138c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0318 22:41:09.472946   14794 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ntqfz" in "kube-system" namespace to be "Ready" ...
	I0318 22:41:09.642864   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:09.687274   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:09.687599   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:09.937720   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:10.143015   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:10.186001   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:10.189374   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:10.437491   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:10.643890   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:10.692978   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:10.695334   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:10.937261   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:11.141818   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:11.185305   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:11.185753   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:11.437260   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:11.478477   14794 pod_ready.go:102] pod "coredns-76f75df574-ntqfz" in "kube-system" namespace has status "Ready":"False"
	I0318 22:41:11.642307   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:11.683691   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:11.686844   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:11.937848   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:12.143627   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:12.184892   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:12.187167   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:12.661271   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:12.662235   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:12.683860   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:12.685815   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:12.936377   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:13.146329   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:13.185438   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:13.185639   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:13.437398   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:13.482668   14794 pod_ready.go:102] pod "coredns-76f75df574-ntqfz" in "kube-system" namespace has status "Ready":"False"
	I0318 22:41:13.643385   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:13.683933   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:13.684849   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:13.937397   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:14.142940   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:14.186547   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:14.187981   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:14.436995   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:14.643369   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:14.684915   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:14.685264   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:14.938386   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:15.141908   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:15.183321   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:15.183836   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:15.436886   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:15.642426   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:15.684743   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:15.685834   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:15.938521   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:15.980343   14794 pod_ready.go:102] pod "coredns-76f75df574-ntqfz" in "kube-system" namespace has status "Ready":"False"
	I0318 22:41:16.142833   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:16.183840   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:16.186860   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:16.437043   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:16.641902   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:16.685057   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:16.686060   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:16.937455   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:17.146629   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:17.186453   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:17.189500   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:17.438021   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:17.644889   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:17.685664   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:17.685975   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:17.937022   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:18.144063   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:18.184505   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:18.185971   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:18.437274   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:18.480646   14794 pod_ready.go:102] pod "coredns-76f75df574-ntqfz" in "kube-system" namespace has status "Ready":"False"
	I0318 22:41:18.643711   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:18.686056   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:18.686062   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:18.937930   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:19.148529   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:19.188309   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:19.188804   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:19.437842   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:19.645804   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:19.686170   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:19.686177   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:19.937368   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:20.143609   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:20.186922   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:20.187195   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:20.439514   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:20.642895   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:20.685537   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:20.686100   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:20.936841   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:20.978909   14794 pod_ready.go:102] pod "coredns-76f75df574-ntqfz" in "kube-system" namespace has status "Ready":"False"
	I0318 22:41:21.141706   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:21.185010   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:21.186739   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:21.436193   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:21.643510   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:21.687216   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:21.687740   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:21.937519   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:22.142671   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:22.185299   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:22.185819   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:22.437714   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:22.643110   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:22.685799   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:22.688636   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:22.937496   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:22.978946   14794 pod_ready.go:102] pod "coredns-76f75df574-ntqfz" in "kube-system" namespace has status "Ready":"False"
	I0318 22:41:23.142876   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:23.183413   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:23.184840   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:23.438205   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:23.642800   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:23.684455   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:23.685920   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:23.938347   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:24.143644   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:24.184202   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:24.186885   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:24.438005   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:24.644797   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:24.686335   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:24.686864   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:24.937254   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:24.979689   14794 pod_ready.go:102] pod "coredns-76f75df574-ntqfz" in "kube-system" namespace has status "Ready":"False"
	I0318 22:41:25.142176   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:25.187404   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:25.187520   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:25.436670   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:25.642186   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:25.690604   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:25.691793   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:25.938765   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:26.143491   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:26.186531   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:26.189319   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:26.437342   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:26.644744   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:26.689644   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:26.691161   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:26.937328   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:27.143530   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:27.184503   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:27.186193   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:27.437923   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:27.479190   14794 pod_ready.go:102] pod "coredns-76f75df574-ntqfz" in "kube-system" namespace has status "Ready":"False"
	I0318 22:41:27.643373   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:27.689391   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:27.691921   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:27.936944   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:28.144117   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:28.194721   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:28.195826   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:28.437046   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:28.643283   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:28.691029   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:28.691193   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:28.937052   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:29.144960   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:29.186099   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:29.186675   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:29.437947   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:29.481785   14794 pod_ready.go:102] pod "coredns-76f75df574-ntqfz" in "kube-system" namespace has status "Ready":"False"
	I0318 22:41:29.642992   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:29.686131   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:29.686132   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:29.937603   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:30.143427   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:30.184228   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:30.184725   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:30.436977   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:30.646494   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:30.684330   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:30.684550   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:30.937315   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:31.142472   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:31.184151   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:31.184469   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:31.437359   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:31.642376   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:31.685035   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:31.688156   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:31.936875   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:31.982641   14794 pod_ready.go:102] pod "coredns-76f75df574-ntqfz" in "kube-system" namespace has status "Ready":"False"
	I0318 22:41:32.144036   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:32.183562   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:32.184293   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:32.438184   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:32.642293   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:32.684262   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:32.688093   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:32.937266   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:33.142403   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:33.185155   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:33.185269   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:33.436818   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:33.643205   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:33.684532   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:33.684892   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:33.937678   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:34.143526   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:34.184597   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:34.186272   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:34.437552   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:34.481838   14794 pod_ready.go:102] pod "coredns-76f75df574-ntqfz" in "kube-system" namespace has status "Ready":"False"
	I0318 22:41:34.643532   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:34.684162   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:34.684844   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:34.937582   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:35.143527   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:35.187010   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:35.188132   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:35.437814   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:35.644000   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:35.683416   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:35.684999   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:35.937194   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:36.174678   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:36.184603   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:36.185537   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:36.436992   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:36.647452   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:36.685793   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:36.686477   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:36.936487   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:36.979211   14794 pod_ready.go:92] pod "coredns-76f75df574-ntqfz" in "kube-system" namespace has status "Ready":"True"
	I0318 22:41:36.979244   14794 pod_ready.go:81] duration metric: took 27.50628943s for pod "coredns-76f75df574-ntqfz" in "kube-system" namespace to be "Ready" ...
	I0318 22:41:36.979265   14794 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-935788" in "kube-system" namespace to be "Ready" ...
	I0318 22:41:36.985534   14794 pod_ready.go:92] pod "etcd-addons-935788" in "kube-system" namespace has status "Ready":"True"
	I0318 22:41:36.985555   14794 pod_ready.go:81] duration metric: took 6.282363ms for pod "etcd-addons-935788" in "kube-system" namespace to be "Ready" ...
	I0318 22:41:36.985564   14794 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-935788" in "kube-system" namespace to be "Ready" ...
	I0318 22:41:36.990671   14794 pod_ready.go:92] pod "kube-apiserver-addons-935788" in "kube-system" namespace has status "Ready":"True"
	I0318 22:41:36.990691   14794 pod_ready.go:81] duration metric: took 5.120461ms for pod "kube-apiserver-addons-935788" in "kube-system" namespace to be "Ready" ...
	I0318 22:41:36.990698   14794 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-935788" in "kube-system" namespace to be "Ready" ...
	I0318 22:41:36.995121   14794 pod_ready.go:92] pod "kube-controller-manager-addons-935788" in "kube-system" namespace has status "Ready":"True"
	I0318 22:41:36.995139   14794 pod_ready.go:81] duration metric: took 4.434572ms for pod "kube-controller-manager-addons-935788" in "kube-system" namespace to be "Ready" ...
	I0318 22:41:36.995148   14794 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7l52v" in "kube-system" namespace to be "Ready" ...
	I0318 22:41:36.999546   14794 pod_ready.go:92] pod "kube-proxy-7l52v" in "kube-system" namespace has status "Ready":"True"
	I0318 22:41:36.999569   14794 pod_ready.go:81] duration metric: took 4.414375ms for pod "kube-proxy-7l52v" in "kube-system" namespace to be "Ready" ...
	I0318 22:41:36.999579   14794 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-935788" in "kube-system" namespace to be "Ready" ...
	I0318 22:41:37.143241   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:37.184272   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:37.185100   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:37.377220   14794 pod_ready.go:92] pod "kube-scheduler-addons-935788" in "kube-system" namespace has status "Ready":"True"
	I0318 22:41:37.377237   14794 pod_ready.go:81] duration metric: took 377.650739ms for pod "kube-scheduler-addons-935788" in "kube-system" namespace to be "Ready" ...
	I0318 22:41:37.377246   14794 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-mhwxb" in "kube-system" namespace to be "Ready" ...
	I0318 22:41:37.437250   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:37.644277   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:37.685470   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:37.686508   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:37.777419   14794 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-mhwxb" in "kube-system" namespace has status "Ready":"True"
	I0318 22:41:37.777440   14794 pod_ready.go:81] duration metric: took 400.186943ms for pod "nvidia-device-plugin-daemonset-mhwxb" in "kube-system" namespace to be "Ready" ...
	I0318 22:41:37.777450   14794 pod_ready.go:38] duration metric: took 36.339253687s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:41:37.777469   14794 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:41:37.777522   14794 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:41:37.796314   14794 api_server.go:72] duration metric: took 41.438187s to wait for apiserver process to appear ...
	I0318 22:41:37.796335   14794 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:41:37.796355   14794 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0318 22:41:37.802243   14794 api_server.go:279] https://192.168.39.13:8443/healthz returned 200:
	ok
	I0318 22:41:37.803170   14794 api_server.go:141] control plane version: v1.29.3
	I0318 22:41:37.803189   14794 api_server.go:131] duration metric: took 6.846577ms to wait for apiserver health ...
	I0318 22:41:37.803198   14794 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:41:37.938330   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:37.985465   14794 system_pods.go:59] 18 kube-system pods found
	I0318 22:41:37.985494   14794 system_pods.go:61] "coredns-76f75df574-ntqfz" [3b0d2ae0-dfa9-4b9e-8f0e-eb3e0f8ec7c3] Running
	I0318 22:41:37.985502   14794 system_pods.go:61] "csi-hostpath-attacher-0" [36445c0d-c840-42e5-a64b-9070a5d60697] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0318 22:41:37.985509   14794 system_pods.go:61] "csi-hostpath-resizer-0" [e2c623f3-bb6d-41d4-824f-9f063e1ffd18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0318 22:41:37.985517   14794 system_pods.go:61] "csi-hostpathplugin-c8frv" [bd13d515-c4e1-4d53-b4ae-a386eaf6c0cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0318 22:41:37.985523   14794 system_pods.go:61] "etcd-addons-935788" [9a18611e-6039-4c4e-b173-0cbdae1d4916] Running
	I0318 22:41:37.985528   14794 system_pods.go:61] "kube-apiserver-addons-935788" [08694eaa-7cb8-4094-b347-627cdbb438fd] Running
	I0318 22:41:37.985532   14794 system_pods.go:61] "kube-controller-manager-addons-935788" [b3ac1b09-5de6-4dcb-932b-bbe68f04ad4a] Running
	I0318 22:41:37.985537   14794 system_pods.go:61] "kube-ingress-dns-minikube" [441205ea-d264-4a30-8799-057e7206f14c] Running
	I0318 22:41:37.985540   14794 system_pods.go:61] "kube-proxy-7l52v" [a942fba5-03ba-40d1-9791-6f5c2759bf5b] Running
	I0318 22:41:37.985543   14794 system_pods.go:61] "kube-scheduler-addons-935788" [8dfb8a00-573c-48fa-9a79-0fe035401f8d] Running
	I0318 22:41:37.985548   14794 system_pods.go:61] "metrics-server-69cf46c98-f486p" [1185426d-ada6-4c7a-aeff-3481c9cfb03d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:41:37.985552   14794 system_pods.go:61] "nvidia-device-plugin-daemonset-mhwxb" [8c3d0ec7-cc92-4284-af18-057f03d243ae] Running
	I0318 22:41:37.985556   14794 system_pods.go:61] "registry-j77v8" [704b949f-b38b-43ab-a839-0657d4adf742] Running
	I0318 22:41:37.985560   14794 system_pods.go:61] "registry-proxy-qfpvm" [ec232402-c520-4e21-b409-309418205181] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0318 22:41:37.985567   14794 system_pods.go:61] "snapshot-controller-58dbcc7b99-ljhsx" [d927b858-2965-426b-89b3-afdbc57eaa80] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 22:41:37.985576   14794 system_pods.go:61] "snapshot-controller-58dbcc7b99-rwl8r" [38f98418-4ae1-4cc1-8fac-69c219ec1805] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 22:41:37.985580   14794 system_pods.go:61] "storage-provisioner" [4cd9c58b-ecbc-47cf-a474-cba8cde37f82] Running
	I0318 22:41:37.985584   14794 system_pods.go:61] "tiller-deploy-7b677967b9-plx4j" [10772ad2-52ab-449b-bd96-23b29d59b221] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0318 22:41:37.985593   14794 system_pods.go:74] duration metric: took 182.387682ms to wait for pod list to return data ...
	I0318 22:41:37.985600   14794 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:41:38.144969   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:38.176779   14794 default_sa.go:45] found service account: "default"
	I0318 22:41:38.176798   14794 default_sa.go:55] duration metric: took 191.190106ms for default service account to be created ...
	I0318 22:41:38.176806   14794 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:41:38.185052   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:38.185155   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:38.383184   14794 system_pods.go:86] 18 kube-system pods found
	I0318 22:41:38.383210   14794 system_pods.go:89] "coredns-76f75df574-ntqfz" [3b0d2ae0-dfa9-4b9e-8f0e-eb3e0f8ec7c3] Running
	I0318 22:41:38.383218   14794 system_pods.go:89] "csi-hostpath-attacher-0" [36445c0d-c840-42e5-a64b-9070a5d60697] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0318 22:41:38.383224   14794 system_pods.go:89] "csi-hostpath-resizer-0" [e2c623f3-bb6d-41d4-824f-9f063e1ffd18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0318 22:41:38.383232   14794 system_pods.go:89] "csi-hostpathplugin-c8frv" [bd13d515-c4e1-4d53-b4ae-a386eaf6c0cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0318 22:41:38.383237   14794 system_pods.go:89] "etcd-addons-935788" [9a18611e-6039-4c4e-b173-0cbdae1d4916] Running
	I0318 22:41:38.383241   14794 system_pods.go:89] "kube-apiserver-addons-935788" [08694eaa-7cb8-4094-b347-627cdbb438fd] Running
	I0318 22:41:38.383246   14794 system_pods.go:89] "kube-controller-manager-addons-935788" [b3ac1b09-5de6-4dcb-932b-bbe68f04ad4a] Running
	I0318 22:41:38.383251   14794 system_pods.go:89] "kube-ingress-dns-minikube" [441205ea-d264-4a30-8799-057e7206f14c] Running
	I0318 22:41:38.383255   14794 system_pods.go:89] "kube-proxy-7l52v" [a942fba5-03ba-40d1-9791-6f5c2759bf5b] Running
	I0318 22:41:38.383258   14794 system_pods.go:89] "kube-scheduler-addons-935788" [8dfb8a00-573c-48fa-9a79-0fe035401f8d] Running
	I0318 22:41:38.383264   14794 system_pods.go:89] "metrics-server-69cf46c98-f486p" [1185426d-ada6-4c7a-aeff-3481c9cfb03d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:41:38.383267   14794 system_pods.go:89] "nvidia-device-plugin-daemonset-mhwxb" [8c3d0ec7-cc92-4284-af18-057f03d243ae] Running
	I0318 22:41:38.383272   14794 system_pods.go:89] "registry-j77v8" [704b949f-b38b-43ab-a839-0657d4adf742] Running
	I0318 22:41:38.383276   14794 system_pods.go:89] "registry-proxy-qfpvm" [ec232402-c520-4e21-b409-309418205181] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0318 22:41:38.383282   14794 system_pods.go:89] "snapshot-controller-58dbcc7b99-ljhsx" [d927b858-2965-426b-89b3-afdbc57eaa80] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 22:41:38.383287   14794 system_pods.go:89] "snapshot-controller-58dbcc7b99-rwl8r" [38f98418-4ae1-4cc1-8fac-69c219ec1805] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 22:41:38.383291   14794 system_pods.go:89] "storage-provisioner" [4cd9c58b-ecbc-47cf-a474-cba8cde37f82] Running
	I0318 22:41:38.383296   14794 system_pods.go:89] "tiller-deploy-7b677967b9-plx4j" [10772ad2-52ab-449b-bd96-23b29d59b221] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0318 22:41:38.383302   14794 system_pods.go:126] duration metric: took 206.490486ms to wait for k8s-apps to be running ...
	I0318 22:41:38.383309   14794 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:41:38.383350   14794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:41:38.402315   14794 system_svc.go:56] duration metric: took 19.000642ms WaitForService to wait for kubelet
	I0318 22:41:38.402338   14794 kubeadm.go:576] duration metric: took 42.044215134s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:41:38.402354   14794 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:41:38.436100   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:38.578183   14794 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:41:38.578212   14794 node_conditions.go:123] node cpu capacity is 2
	I0318 22:41:38.578230   14794 node_conditions.go:105] duration metric: took 175.870293ms to run NodePressure ...
	I0318 22:41:38.578245   14794 start.go:240] waiting for startup goroutines ...
	I0318 22:41:38.642709   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:38.684623   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:38.685208   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:38.937113   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:39.142840   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:39.183770   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:39.185490   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:39.436204   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:39.644199   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:39.683909   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:39.684510   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:39.939358   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:40.142673   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:40.183573   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:40.184222   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:40.437315   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:40.642460   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:40.685189   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:40.687269   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:40.937878   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:41.142575   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:41.183879   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:41.184444   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:41.468290   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:41.642581   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:41.691015   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:41.693089   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:41.939144   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:42.143638   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:42.187502   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:42.187827   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:42.437888   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:42.644523   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:42.686635   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:42.687093   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:42.937379   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:43.143438   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:43.193455   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:43.195596   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:43.437154   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:43.643331   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:43.684361   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:43.685651   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:43.939374   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:44.143149   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:44.184826   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:44.188968   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:44.437730   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:44.643805   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:44.686527   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:44.689195   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:44.940956   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:45.142574   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:45.185955   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:45.186288   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:45.437062   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:45.644108   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:45.686289   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:45.686419   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:45.937053   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:46.141961   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:46.185902   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:46.186337   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:46.437552   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:46.647632   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:46.686119   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:46.686620   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:46.937911   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:47.142386   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:47.242013   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:47.242599   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:47.437620   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:47.642814   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:47.684898   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:47.685311   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:47.938223   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:48.146334   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:48.188053   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:48.188406   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:48.437209   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:48.651755   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:48.686859   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:48.687218   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:48.938103   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:49.143928   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:49.182913   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:49.185754   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:49.437870   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:49.645351   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:49.684351   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:49.684856   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:49.936843   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:50.150869   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:50.189417   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:50.189641   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:50.437807   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:50.643206   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:50.688108   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:50.689593   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:50.939051   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:51.143497   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:51.185916   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:51.186301   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:51.439062   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:51.643249   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:51.684878   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:51.687667   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:51.937957   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:52.142517   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:52.185078   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:52.185347   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:52.437968   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:52.642844   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:52.686405   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:52.686606   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:52.936849   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:53.149376   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:53.184877   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:53.185245   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:53.437498   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:53.644485   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:53.685571   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:53.685791   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:53.936902   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:54.143947   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:54.184408   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:54.184562   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:54.437438   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:54.642982   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:54.683932   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:54.685399   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:54.937418   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:55.142918   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:55.183871   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:55.185578   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:55.438465   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:55.643551   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:55.686137   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:55.686906   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:55.937193   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:56.142207   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:56.184940   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:56.185784   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:56.437635   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:56.644332   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:56.684025   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:56.685409   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:56.937299   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:57.142821   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:57.184684   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:57.184939   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:57.437707   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:57.642480   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:57.684510   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:57.684671   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:57.936546   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:58.144058   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:58.184403   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:58.186894   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:58.436736   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:58.642904   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:58.684486   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:58.685129   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:58.936951   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:59.142891   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:59.185022   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:59.185126   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:59.437153   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:41:59.643234   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:41:59.684141   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:41:59.684954   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:41:59.936854   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:00.147081   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:00.205728   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:00.209093   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:00.437884   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:00.647480   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:00.684855   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:00.686078   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:00.937893   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:01.143417   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:01.185033   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:01.185251   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:01.437786   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:01.644397   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:01.685748   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:01.686004   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:01.942896   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:02.148578   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:02.184144   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:02.184210   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:02.438057   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:02.643905   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:02.685639   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:02.688177   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:02.937014   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:03.148203   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:03.183281   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:03.185386   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:03.437928   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:03.642295   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:03.684315   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:03.684530   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:03.937566   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:04.142981   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:04.185021   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:04.185834   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:04.437421   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:04.642972   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:04.683647   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:04.685414   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:04.937349   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:05.143895   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:05.184199   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:05.185579   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:05.437142   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:05.642155   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:05.683262   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:05.684980   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:05.936517   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:06.142704   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:06.186026   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:06.186378   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:06.437346   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:06.642044   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:06.683426   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:06.685745   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:06.936445   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:07.143265   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:07.185083   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:07.185181   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:07.618573   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:07.646022   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:07.683481   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:07.689958   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:07.937189   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:08.142717   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:08.183675   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:08.185499   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:08.437270   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:08.642737   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:08.686323   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:08.688285   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:08.938852   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:09.142159   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:09.190328   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:09.191009   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:09.438147   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:09.645553   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:09.685825   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:09.687709   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:10.179513   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:10.180175   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:10.188376   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:10.188451   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:10.438206   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:10.642730   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:10.684598   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:10.684745   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:10.936412   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:11.142068   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:11.189509   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:11.189765   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:11.437607   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:11.642766   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:11.683874   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:11.684354   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:11.937636   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:12.142407   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:12.185364   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:12.186914   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:12.436655   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:12.644801   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:12.685952   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:12.687994   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:12.937447   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:13.142838   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:13.185329   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:13.186916   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:13.437048   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:13.642157   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:13.685051   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:13.686539   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:13.936874   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:14.143664   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:14.183836   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:14.184277   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:14.437870   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:14.642409   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:14.686694   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:14.686974   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:14.936948   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:15.143192   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:15.184599   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:15.186356   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:15.438417   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:15.643793   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:15.686131   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:15.686508   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:15.937790   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:16.142720   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:16.183855   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 22:42:16.184109   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:16.437259   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:16.642880   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:16.685538   14794 kapi.go:107] duration metric: took 1m11.006892969s to wait for kubernetes.io/minikube-addons=registry ...
	I0318 22:42:16.685772   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:16.937058   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:17.143159   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:17.184335   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:17.437292   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:17.644282   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:17.684460   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:17.937369   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:18.142792   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:18.183603   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:18.438194   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:18.643935   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:18.683937   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:18.936899   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:19.144289   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:19.192415   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:19.437896   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:19.642954   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:19.683743   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:19.937261   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:20.142874   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:20.184245   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:20.439526   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:20.643014   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:20.684696   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:20.944171   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:21.143424   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:21.184536   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:21.437233   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:21.642491   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:21.692066   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:21.939314   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:22.144607   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:22.185246   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:22.436863   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:22.643014   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:22.685244   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:22.942364   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:23.142935   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:23.183679   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:23.454650   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:23.643185   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:23.685232   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:23.944518   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:24.146552   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:24.438038   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:24.439206   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:24.643870   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:24.683981   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:24.937431   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:25.153590   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:25.183998   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:25.437770   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:25.642798   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:25.683806   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:25.937999   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:26.141810   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:26.186550   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:26.438082   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:26.643376   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:26.685598   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:26.937361   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:27.143447   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:27.188628   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:27.438638   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:27.642929   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:27.683810   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:27.937709   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:28.142975   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:28.183316   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:28.436994   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:28.643616   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:28.699151   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:28.937657   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:29.226180   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:29.226727   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:29.436696   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:29.643242   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:29.685583   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:29.938908   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:30.161681   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:30.209378   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:30.437066   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:30.648826   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:30.688264   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:30.939020   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:31.151367   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:31.185336   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:31.438066   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:31.649995   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:31.684768   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:31.948866   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:32.142222   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:32.184293   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:32.439764   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:32.644816   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:32.684559   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:32.938190   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:33.143803   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:33.183852   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:33.438562   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:33.642835   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:33.684558   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:33.937764   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:34.144575   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:34.184476   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:34.439428   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:34.643067   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:34.684114   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:34.937253   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:35.142446   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:35.183738   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:35.438140   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:35.643501   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:35.683861   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:35.936514   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:36.143669   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:36.183829   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:36.437363   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:36.642859   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:36.684862   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:36.936621   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:37.141627   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 22:42:37.184036   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:37.437174   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:37.642928   14794 kapi.go:107] duration metric: took 1m31.006477298s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0318 22:42:37.684039   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:37.937093   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:38.183861   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:38.437535   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:38.684721   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:38.937626   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:39.185021   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:39.437554   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:39.685028   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:39.936315   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:40.185109   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:40.437209   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:40.684276   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:40.937679   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:41.184295   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:41.437402   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:41.685217   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:41.938526   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:42.184692   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:42.438147   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:42.684666   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:42.937840   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:43.184559   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:43.438452   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:43.687390   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:43.937397   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:44.184531   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:44.437554   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:44.685756   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:44.937164   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:45.184672   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:45.439163   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:45.685083   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:45.939512   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:46.184249   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:46.437622   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:46.684617   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:46.938002   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:47.184479   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:47.437532   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:47.686114   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:47.937378   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:48.185161   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:48.436897   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:48.683883   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:48.937993   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:49.188258   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:49.437350   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:49.684306   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:49.937220   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:50.184964   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:50.437154   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:50.684693   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:50.939109   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:51.184643   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:51.438785   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:51.685893   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:51.937207   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:52.185741   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:52.437905   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:52.684681   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:52.938082   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:53.184830   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:53.439949   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:53.684589   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:53.937388   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:54.185079   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:54.436966   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:54.684280   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:54.937325   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:55.186598   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:55.438787   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:55.683928   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:55.936958   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:56.184553   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:56.437720   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:56.685626   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:56.937532   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:57.185106   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:57.437594   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:57.684961   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:57.936905   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:58.184610   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:58.438080   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:58.686207   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:58.939776   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:59.184163   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:59.437405   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:42:59.685299   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:42:59.939294   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:00.184314   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:00.437890   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:00.683970   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:00.936940   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:01.184927   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:01.437554   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:01.685268   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:01.937349   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:02.185377   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:02.437575   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:02.685381   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:02.938173   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:03.185069   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:03.437012   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:03.683968   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:03.936986   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:04.184594   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:04.439147   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:04.684937   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:04.937564   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:05.185937   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:05.438607   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:05.686068   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:05.937115   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:06.187970   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:06.439244   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:06.684099   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:06.939153   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:07.184630   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:07.438567   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:07.686813   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:07.937449   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:08.185623   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:08.440612   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:08.686810   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:08.939313   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:09.185264   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:09.440866   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:09.684466   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:09.937572   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:10.184890   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:10.437178   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:10.684854   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:10.939258   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:11.184375   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:11.438082   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:11.686759   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:11.938145   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:12.184744   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:12.441277   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:12.684620   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:12.937731   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:13.186762   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:13.436716   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:13.686407   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:13.937336   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:14.185830   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:14.436666   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:14.685918   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:14.937644   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:15.185442   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:15.437800   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:15.685275   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:15.937106   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:16.185915   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:16.437088   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:16.686389   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:16.937005   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:17.184304   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:17.437792   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:17.684544   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:17.938857   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:18.184068   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:18.438706   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:18.684129   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:18.937257   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:19.184806   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:19.438300   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:19.684590   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:19.937332   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:20.185413   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:20.437544   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:20.687864   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:20.937527   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:21.185620   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:21.437445   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:21.685257   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:21.937124   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:22.184714   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:22.437985   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:22.684526   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:22.937865   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:23.184307   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:23.438425   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:23.684597   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:23.938731   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:24.190630   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:24.440807   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:24.685600   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:24.938163   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:25.186473   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:25.440460   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:25.685045   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:25.938455   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:26.186580   14794 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 22:43:26.437066   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:26.695913   14794 kapi.go:107] duration metric: took 2m21.016255449s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0318 22:43:26.939339   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:27.438439   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:27.939457   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:28.437575   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:28.936945   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:29.437866   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:29.938948   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:30.437992   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:30.937697   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:31.436810   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:31.938080   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:32.438561   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:32.941592   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:33.437510   14794 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 22:43:33.936900   14794 kapi.go:107] duration metric: took 2m25.503562046s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0318 22:43:33.938696   14794 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-935788 cluster.
	I0318 22:43:33.940237   14794 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0318 22:43:33.941462   14794 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0318 22:43:33.942860   14794 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, yakd, nvidia-device-plugin, helm-tiller, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0318 22:43:33.944044   14794 addons.go:505] duration metric: took 2m37.585897793s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner yakd nvidia-device-plugin helm-tiller inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0318 22:43:33.944077   14794 start.go:245] waiting for cluster config update ...
	I0318 22:43:33.944093   14794 start.go:254] writing updated cluster config ...
	I0318 22:43:33.944326   14794 ssh_runner.go:195] Run: rm -f paused
	I0318 22:43:33.993239   14794 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0318 22:43:33.995022   14794 out.go:177] * Done! kubectl is now configured to use "addons-935788" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	f67eeed9f0350       a416a98b71e22       3 seconds ago        Exited              helper-pod                               0                   1e10b970e3064       helper-pod-delete-pvc-dc5ffa50-03c1-493c-ae58-8e564d4e9229
	043456b0c0b51       ba5dc23f65d4c       6 seconds ago        Exited              busybox                                  0                   b97516cb45663       test-local-path
	3541e995bbcfa       db2fc13d44d50       28 seconds ago       Running             gcp-auth                                 0                   28f2cb22d37cf       gcp-auth-7d69788767-l8p9g
	1aaacc103ceed       ffcc66479b5ba       34 seconds ago       Running             controller                               0                   817b89adfc07a       ingress-nginx-controller-65496f9567-9wxqv
	5e338e9119382       738351fd438f0       About a minute ago   Running             csi-snapshotter                          0                   b16207c642493       csi-hostpathplugin-c8frv
	6b3faba73d4f9       931dbfd16f87c       About a minute ago   Running             csi-provisioner                          0                   b16207c642493       csi-hostpathplugin-c8frv
	f0f37e4827a8c       e899260153aed       About a minute ago   Running             liveness-probe                           0                   b16207c642493       csi-hostpathplugin-c8frv
	2ed22460502ee       e255e073c508c       About a minute ago   Running             hostpath                                 0                   b16207c642493       csi-hostpathplugin-c8frv
	d57b4008b0d23       88ef14a257f42       About a minute ago   Running             node-driver-registrar                    0                   b16207c642493       csi-hostpathplugin-c8frv
	755d6141f48ef       19a639eda60f0       About a minute ago   Running             csi-resizer                              0                   e55b326ed0ef7       csi-hostpath-resizer-0
	5b5fa7b47701a       59cbb42146a37       About a minute ago   Running             csi-attacher                             0                   9f158d1cc1aca       csi-hostpath-attacher-0
	b9959acfcea85       a1ed5895ba635       About a minute ago   Running             csi-external-health-monitor-controller   0                   b16207c642493       csi-hostpathplugin-c8frv
	30ed3a42fdeaa       b29d748098e32       About a minute ago   Exited              patch                                    0                   aaa1cf2500318       ingress-nginx-admission-patch-g2wp4
	7050f9968f88c       b29d748098e32       About a minute ago   Exited              create                                   0                   68b734951d340       ingress-nginx-admission-create-cmfc7
	12d626e074ade       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   e7c965fb01df5       snapshot-controller-58dbcc7b99-rwl8r
	7672cbdbc4460       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   f4023c2bccead       snapshot-controller-58dbcc7b99-ljhsx
	bfb03adb0ce8b       e16d1e3a10667       About a minute ago   Running             local-path-provisioner                   0                   30e6138ca6500       local-path-provisioner-78b46b4d5c-rzx8d
	fdcd84f4c2818       d2fd211e7dcaa       About a minute ago   Running             registry-proxy                           0                   52cacb028d0ed       registry-proxy-qfpvm
	b8037774af38a       b9a5a1927366a       2 minutes ago        Running             metrics-server                           0                   4c305242ce84c       metrics-server-69cf46c98-f486p
	a8b545845b352       31de47c733c91       2 minutes ago        Running             yakd                                     0                   a3c5e23a5a158       yakd-dashboard-9947fc6bf-gs5mh
	14a3183910aac       9363667f8aecb       2 minutes ago        Running             registry                                 0                   11007c58780fe       registry-j77v8
	5b2b2c0f6b79f       1499ed4fbd0aa       2 minutes ago        Running             minikube-ingress-dns                     0                   f24c2e8779c69       kube-ingress-dns-minikube
	20b598b392e63       1a9bd6f561b5c       2 minutes ago        Running             cloud-spanner-emulator                   0                   c9a0ad4c41e57       cloud-spanner-emulator-5446596998-vntjl
	b2b06719c653d       f6df8d4b582f4       2 minutes ago        Running             nvidia-device-plugin-ctr                 0                   620455e7f454f       nvidia-device-plugin-daemonset-mhwxb
	3fa9f07cb115a       6e38f40d628db       2 minutes ago        Running             storage-provisioner                      0                   262a97153e093       storage-provisioner
	6563759498e71       cbb01a7bd410d       3 minutes ago        Running             coredns                                  0                   74fd072b7a3e4       coredns-76f75df574-ntqfz
	6670187500091       a1d263b5dc5b0       3 minutes ago        Running             kube-proxy                               0                   8d36ac8b8c49a       kube-proxy-7l52v
	ff8846e9d0768       8c390d98f50c0       3 minutes ago        Running             kube-scheduler                           0                   126a54cd99c98       kube-scheduler-addons-935788
	907d446672f34       3861cfcd7c04c       3 minutes ago        Running             etcd                                     0                   3e7799788666b       etcd-addons-935788
	851fe0b66ee61       6052a25da3f97       3 minutes ago        Running             kube-controller-manager                  0                   2798d7dcd85fe       kube-controller-manager-addons-935788
	868a9521feab9       39f995c9f1996       3 minutes ago        Running             kube-apiserver                           0                   02a52dd5a652b       kube-apiserver-addons-935788
	
	
	==> containerd <==
	Mar 18 22:43:58 addons-935788 containerd[650]: time="2024-03-18T22:43:58.961381377Z" level=warning msg="cleaning up after shim disconnected" id=1e10b970e3064d2a8d2552d1c744bcc66831fde1c18cd22eabde96f91bbc84ce namespace=k8s.io
	Mar 18 22:43:58 addons-935788 containerd[650]: time="2024-03-18T22:43:58.961443186Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.103428807Z" level=info msg="TearDown network for sandbox \"1e10b970e3064d2a8d2552d1c744bcc66831fde1c18cd22eabde96f91bbc84ce\" successfully"
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.103507972Z" level=info msg="StopPodSandbox for \"1e10b970e3064d2a8d2552d1c744bcc66831fde1c18cd22eabde96f91bbc84ce\" returns successfully"
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.112244868Z" level=info msg="TearDown network for sandbox \"187c96cafcf04f16df6485108d4d018cdafb3051434f57aac7a99fe45ca9987a\" successfully"
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.112314485Z" level=info msg="StopPodSandbox for \"187c96cafcf04f16df6485108d4d018cdafb3051434f57aac7a99fe45ca9987a\" returns successfully"
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.631418988Z" level=info msg="StopPodSandbox for \"e6012745972725f2d9ccb746246e50ebed7cdf2f83d2fa947c0438ebac0e441f\""
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.631514099Z" level=info msg="Container to stop \"0b165fc5a15c678cdfd78e9d53b6435668f88971c24b1ac08a13b25216548157\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.746356139Z" level=info msg="shim disconnected" id=e6012745972725f2d9ccb746246e50ebed7cdf2f83d2fa947c0438ebac0e441f namespace=k8s.io
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.746426383Z" level=warning msg="cleaning up after shim disconnected" id=e6012745972725f2d9ccb746246e50ebed7cdf2f83d2fa947c0438ebac0e441f namespace=k8s.io
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.746438411Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.774744165Z" level=info msg="TearDown network for sandbox \"e6012745972725f2d9ccb746246e50ebed7cdf2f83d2fa947c0438ebac0e441f\" successfully"
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.774801927Z" level=info msg="StopPodSandbox for \"e6012745972725f2d9ccb746246e50ebed7cdf2f83d2fa947c0438ebac0e441f\" returns successfully"
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.876239514Z" level=info msg="RemoveContainer for \"e7f391dc93997527c35f8a4586c86b2a73b6be611e34adeba89bbb58453aa1ba\""
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.890787140Z" level=info msg="RemoveContainer for \"e7f391dc93997527c35f8a4586c86b2a73b6be611e34adeba89bbb58453aa1ba\" returns successfully"
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.897007678Z" level=info msg="RemoveContainer for \"0b165fc5a15c678cdfd78e9d53b6435668f88971c24b1ac08a13b25216548157\""
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.914832332Z" level=info msg="RemoveContainer for \"0b165fc5a15c678cdfd78e9d53b6435668f88971c24b1ac08a13b25216548157\" returns successfully"
	Mar 18 22:43:59 addons-935788 containerd[650]: time="2024-03-18T22:43:59.986280944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:task-pv-pod-restore,Uid:15913949-3374-4364-8818-e5dda6399f18,Namespace:default,Attempt:0,}"
	Mar 18 22:44:00 addons-935788 containerd[650]: time="2024-03-18T22:44:00.093286548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 22:44:00 addons-935788 containerd[650]: time="2024-03-18T22:44:00.097433093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 22:44:00 addons-935788 containerd[650]: time="2024-03-18T22:44:00.097609411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 22:44:00 addons-935788 containerd[650]: time="2024-03-18T22:44:00.097951331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 22:44:00 addons-935788 containerd[650]: time="2024-03-18T22:44:00.202436928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:task-pv-pod-restore,Uid:15913949-3374-4364-8818-e5dda6399f18,Namespace:default,Attempt:0,} returns sandbox id \"6bb1317638124dfe69e822ee4e641205821a8937f321267ea964487d2996956f\""
	Mar 18 22:44:00 addons-935788 containerd[650]: time="2024-03-18T22:44:00.208527773Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Mar 18 22:44:00 addons-935788 containerd[650]: time="2024-03-18T22:44:00.237828351Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	
	
	==> coredns [6563759498e7169a18ed9e35faab804ba6aa738c954ff2d1ff833213c54179d3] <==
	[INFO] 10.244.0.8:45507 - 62976 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000488969s
	[INFO] 10.244.0.8:34369 - 27291 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00017344s
	[INFO] 10.244.0.8:34369 - 46488 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000160609s
	[INFO] 10.244.0.8:57339 - 24390 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000123474s
	[INFO] 10.244.0.8:57339 - 50760 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00007548s
	[INFO] 10.244.0.8:37082 - 36032 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000148823s
	[INFO] 10.244.0.8:37082 - 42950 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000269664s
	[INFO] 10.244.0.8:59733 - 64381 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000048768s
	[INFO] 10.244.0.8:59733 - 17531 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000033036s
	[INFO] 10.244.0.8:36802 - 29920 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113049s
	[INFO] 10.244.0.8:36802 - 40957 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000027982s
	[INFO] 10.244.0.8:57489 - 64957 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003325s
	[INFO] 10.244.0.8:57489 - 54195 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027738s
	[INFO] 10.244.0.8:50684 - 63936 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000023463s
	[INFO] 10.244.0.8:50684 - 20431 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000021241s
	[INFO] 10.244.0.22:49769 - 40878 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001353904s
	[INFO] 10.244.0.22:56229 - 13700 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002181831s
	[INFO] 10.244.0.22:59273 - 7370 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130623s
	[INFO] 10.244.0.22:56813 - 39778 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000822176s
	[INFO] 10.244.0.22:54251 - 2896 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000174268s
	[INFO] 10.244.0.22:47168 - 4280 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.001018545s
	[INFO] 10.244.0.22:45875 - 36834 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00068769s
	[INFO] 10.244.0.22:33711 - 58904 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.002111501s
	[INFO] 10.244.0.27:44565 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000234865s
	[INFO] 10.244.0.27:60762 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000142173s
	
	
	==> describe nodes <==
	Name:               addons-935788
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-935788
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a199844351973d00eb5dd1cc0bf4d2238e461f04
	                    minikube.k8s.io/name=addons-935788
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T22_40_43_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-935788
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-935788"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 22:40:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-935788
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 22:43:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 22:43:47 +0000   Mon, 18 Mar 2024 22:40:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 22:43:47 +0000   Mon, 18 Mar 2024 22:40:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 22:43:47 +0000   Mon, 18 Mar 2024 22:40:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 22:43:47 +0000   Mon, 18 Mar 2024 22:40:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.13
	  Hostname:    addons-935788
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 73cfbc7604ff4577b34b625ab6086ff2
	  System UUID:                73cfbc76-04ff-4577-b34b-625ab6086ff2
	  Boot ID:                    c483b554-9e90-479b-8f3f-0e259d9bdf9c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.14
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5446596998-vntjl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     task-pv-pod-restore                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  gcp-auth                    gcp-auth-7d69788767-l8p9g                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  ingress-nginx               ingress-nginx-controller-65496f9567-9wxqv    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         2m56s
	  kube-system                 coredns-76f75df574-ntqfz                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m5s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 csi-hostpathplugin-c8frv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 etcd-addons-935788                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m18s
	  kube-system                 kube-apiserver-addons-935788                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 kube-controller-manager-addons-935788        200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-proxy-7l52v                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 kube-scheduler-addons-935788                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 metrics-server-69cf46c98-f486p               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         2m58s
	  kube-system                 nvidia-device-plugin-daemonset-mhwxb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 registry-j77v8                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 registry-proxy-qfpvm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 snapshot-controller-58dbcc7b99-ljhsx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 snapshot-controller-58dbcc7b99-rwl8r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  local-path-storage          local-path-provisioner-78b46b4d5c-rzx8d      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-gs5mh               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     2m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m3s   kube-proxy       
	  Normal  Starting                 3m18s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m18s  kubelet          Node addons-935788 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m18s  kubelet          Node addons-935788 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m18s  kubelet          Node addons-935788 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m18s  kubelet          Node addons-935788 status is now: NodeReady
	  Normal  RegisteredNode           3m6s   node-controller  Node addons-935788 event: Registered Node addons-935788 in Controller
	
	
	==> dmesg <==
	[  +5.342421] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.055787] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.604786] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[  +4.539816] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +0.061016] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.706751] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[  +0.086658] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.267912] systemd-fstab-generator[1429]: Ignoring "noauto" option for root device
	[  +0.182594] kauditd_printk_skb: 21 callbacks suppressed
	[Mar18 22:41] kauditd_printk_skb: 104 callbacks suppressed
	[  +5.244039] kauditd_printk_skb: 133 callbacks suppressed
	[  +6.306130] kauditd_printk_skb: 52 callbacks suppressed
	[ +26.608996] kauditd_printk_skb: 6 callbacks suppressed
	[Mar18 22:42] kauditd_printk_skb: 6 callbacks suppressed
	[ +12.151476] kauditd_printk_skb: 24 callbacks suppressed
	[ +11.306425] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.994055] kauditd_printk_skb: 69 callbacks suppressed
	[ +15.724122] kauditd_printk_skb: 12 callbacks suppressed
	[Mar18 22:43] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.733415] kauditd_printk_skb: 42 callbacks suppressed
	[  +6.799542] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.193962] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.633375] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.394808] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.126021] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [907d446672f347746925f615baed539274a7dda031174df1afd39a4e98e93fe0] <==
	{"level":"info","ts":"2024-03-18T22:41:00.041773Z","caller":"traceutil/trace.go:171","msg":"trace[1868450386] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:396; }","duration":"217.15354ms","start":"2024-03-18T22:40:59.824607Z","end":"2024-03-18T22:41:00.04176Z","steps":["trace[1868450386] 'agreement among raft nodes before linearized reading'  (duration: 211.231619ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T22:41:05.002014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.482477ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/snapshot-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T22:41:05.002152Z","caller":"traceutil/trace.go:171","msg":"trace[594140232] range","detail":"{range_begin:/registry/deployments/kube-system/snapshot-controller; range_end:; response_count:0; response_revision:646; }","duration":"169.666031ms","start":"2024-03-18T22:41:04.832467Z","end":"2024-03-18T22:41:05.002133Z","steps":["trace[594140232] 'range keys from in-memory index tree'  (duration: 169.439347ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T22:41:05.002607Z","caller":"traceutil/trace.go:171","msg":"trace[66227497] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"169.965736ms","start":"2024-03-18T22:41:04.83263Z","end":"2024-03-18T22:41:05.002596Z","steps":["trace[66227497] 'process raft request'  (duration: 167.547287ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T22:41:05.005252Z","caller":"traceutil/trace.go:171","msg":"trace[1401857873] transaction","detail":"{read_only:false; response_revision:648; number_of_response:1; }","duration":"165.451559ms","start":"2024-03-18T22:41:04.839793Z","end":"2024-03-18T22:41:05.005244Z","steps":["trace[1401857873] 'process raft request'  (duration: 165.390317ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T22:41:05.00549Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.065499ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:729"}
	{"level":"info","ts":"2024-03-18T22:41:05.005547Z","caller":"traceutil/trace.go:171","msg":"trace[103555511] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:648; }","duration":"100.145044ms","start":"2024-03-18T22:41:04.905393Z","end":"2024-03-18T22:41:05.005538Z","steps":["trace[103555511] 'agreement among raft nodes before linearized reading'  (duration: 99.998389ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T22:41:12.648604Z","caller":"traceutil/trace.go:171","msg":"trace[231315311] linearizableReadLoop","detail":"{readStateIndex:878; appliedIndex:877; }","duration":"221.093273ms","start":"2024-03-18T22:41:12.427497Z","end":"2024-03-18T22:41:12.64859Z","steps":["trace[231315311] 'read index received'  (duration: 220.963387ms)","trace[231315311] 'applied index is now lower than readState.Index'  (duration: 129.099µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T22:41:12.648679Z","caller":"traceutil/trace.go:171","msg":"trace[1276093705] transaction","detail":"{read_only:false; response_revision:858; number_of_response:1; }","duration":"444.308497ms","start":"2024-03-18T22:41:12.204364Z","end":"2024-03-18T22:41:12.648672Z","steps":["trace[1276093705] 'process raft request'  (duration: 444.100702ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T22:41:12.649022Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.343691ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-76f75df574-ntqfz\" ","response":"range_response_count:1 size:4952"}
	{"level":"info","ts":"2024-03-18T22:41:12.649318Z","caller":"traceutil/trace.go:171","msg":"trace[1847853695] range","detail":"{range_begin:/registry/pods/kube-system/coredns-76f75df574-ntqfz; range_end:; response_count:1; response_revision:858; }","duration":"182.544607ms","start":"2024-03-18T22:41:12.466643Z","end":"2024-03-18T22:41:12.649187Z","steps":["trace[1847853695] 'agreement among raft nodes before linearized reading'  (duration: 182.246758ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T22:41:12.649518Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.017324ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11161"}
	{"level":"info","ts":"2024-03-18T22:41:12.649571Z","caller":"traceutil/trace.go:171","msg":"trace[1454282423] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:858; }","duration":"222.089046ms","start":"2024-03-18T22:41:12.427474Z","end":"2024-03-18T22:41:12.649564Z","steps":["trace[1454282423] 'agreement among raft nodes before linearized reading'  (duration: 221.971843ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T22:41:12.649357Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T22:41:12.204352Z","time spent":"444.341888ms","remote":"127.0.0.1:36940","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":769,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-7d69788767-l8p9g.17bdfd0dae623c65\" mod_revision:854 > success:<request_put:<key:\"/registry/events/gcp-auth/gcp-auth-7d69788767-l8p9g.17bdfd0dae623c65\" value_size:683 lease:5462178413312897606 >> failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-7d69788767-l8p9g.17bdfd0dae623c65\" > >"}
	{"level":"info","ts":"2024-03-18T22:42:07.603211Z","caller":"traceutil/trace.go:171","msg":"trace[688913686] linearizableReadLoop","detail":"{readStateIndex:1014; appliedIndex:1013; }","duration":"179.690552ms","start":"2024-03-18T22:42:07.423494Z","end":"2024-03-18T22:42:07.603184Z","steps":["trace[688913686] 'read index received'  (duration: 171.323056ms)","trace[688913686] 'applied index is now lower than readState.Index'  (duration: 8.366677ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T22:42:07.603386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.866563ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11161"}
	{"level":"info","ts":"2024-03-18T22:42:07.603409Z","caller":"traceutil/trace.go:171","msg":"trace[1522170638] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:982; }","duration":"179.932553ms","start":"2024-03-18T22:42:07.423471Z","end":"2024-03-18T22:42:07.603403Z","steps":["trace[1522170638] 'agreement among raft nodes before linearized reading'  (duration: 179.816986ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T22:42:10.156321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.960249ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11161"}
	{"level":"info","ts":"2024-03-18T22:42:10.156404Z","caller":"traceutil/trace.go:171","msg":"trace[719142506] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:990; }","duration":"233.073711ms","start":"2024-03-18T22:42:09.923317Z","end":"2024-03-18T22:42:10.15639Z","steps":["trace[719142506] 'range keys from in-memory index tree'  (duration: 232.852533ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T22:42:24.420753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.19616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-create-cmfc7\" ","response":"range_response_count:1 size:4116"}
	{"level":"warn","ts":"2024-03-18T22:42:24.420886Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.773998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14071"}
	{"level":"info","ts":"2024-03-18T22:42:24.420928Z","caller":"traceutil/trace.go:171","msg":"trace[1747088740] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1043; }","duration":"251.823325ms","start":"2024-03-18T22:42:24.169094Z","end":"2024-03-18T22:42:24.420917Z","steps":["trace[1747088740] 'range keys from in-memory index tree'  (duration: 251.602744ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T22:42:24.420834Z","caller":"traceutil/trace.go:171","msg":"trace[984804940] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-create-cmfc7; range_end:; response_count:1; response_revision:1043; }","duration":"189.314163ms","start":"2024-03-18T22:42:24.231497Z","end":"2024-03-18T22:42:24.420812Z","steps":["trace[984804940] 'range keys from in-memory index tree'  (duration: 189.055909ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T22:42:29.204029Z","caller":"traceutil/trace.go:171","msg":"trace[957697834] transaction","detail":"{read_only:false; response_revision:1083; number_of_response:1; }","duration":"192.176807ms","start":"2024-03-18T22:42:29.011825Z","end":"2024-03-18T22:42:29.204002Z","steps":["trace[957697834] 'process raft request'  (duration: 191.20741ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T22:43:30.271685Z","caller":"traceutil/trace.go:171","msg":"trace[755992197] transaction","detail":"{read_only:false; response_revision:1255; number_of_response:1; }","duration":"116.285493ms","start":"2024-03-18T22:43:30.155373Z","end":"2024-03-18T22:43:30.271658Z","steps":["trace[755992197] 'process raft request'  (duration: 116.195767ms)"],"step_count":1}
	
	
	==> gcp-auth [3541e995bbcfa92cb5a074395e0e118b54cba3e3dcec51cc62f87ce03230d9a1] <==
	2024/03/18 22:43:32 GCP Auth Webhook started!
	2024/03/18 22:43:34 Ready to marshal response ...
	2024/03/18 22:43:34 Ready to write response ...
	2024/03/18 22:43:34 Ready to marshal response ...
	2024/03/18 22:43:34 Ready to write response ...
	2024/03/18 22:43:37 Ready to marshal response ...
	2024/03/18 22:43:37 Ready to write response ...
	2024/03/18 22:43:39 Ready to marshal response ...
	2024/03/18 22:43:39 Ready to write response ...
	2024/03/18 22:43:45 Ready to marshal response ...
	2024/03/18 22:43:45 Ready to write response ...
	2024/03/18 22:43:56 Ready to marshal response ...
	2024/03/18 22:43:56 Ready to write response ...
	2024/03/18 22:43:59 Ready to marshal response ...
	2024/03/18 22:43:59 Ready to write response ...
	
	
	==> kernel <==
	 22:44:01 up 3 min,  0 users,  load average: 1.36, 0.98, 0.42
	Linux addons-935788 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [868a9521feab9433b650ec5d71e54ef73195056a0531e8612d77036a7d021805] <==
	I0318 22:41:04.668878       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 22:41:04.668911       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 22:41:04.775953       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 22:41:04.775994       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 22:41:05.338752       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.103.231.224"}
	I0318 22:41:05.375852       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.110.29.19"}
	I0318 22:41:05.459791       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0318 22:41:06.188809       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.105.184.87"}
	I0318 22:41:06.219306       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0318 22:41:06.422401       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.104.161.242"}
	I0318 22:41:08.130605       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.110.6.153"}
	E0318 22:41:50.133720       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.246.76:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.246.76:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.246.76:443: connect: connection refused
	W0318 22:41:50.134679       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:41:50.134934       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0318 22:41:50.135968       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.246.76:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.246.76:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.246.76:443: connect: connection refused
	E0318 22:41:50.141601       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.246.76:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.246.76:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.246.76:443: connect: connection refused
	I0318 22:41:50.213373       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0318 22:42:09.555370       1 writers.go:116] apiserver was unable to close cleanly the response writer: write tcp 192.168.39.13:8443->192.168.39.13:18513: write: connection reset by peer
	E0318 22:42:09.555688       1 timeout.go:142] post-timeout activity - time-elapsed: 6.188µs, GET "/api/v1/pods" result: <nil>
	E0318 22:42:09.555924       1 wrap.go:54] timeout or abort while handling: method=GET URI="/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)addons-935788&limit=500&resourceVersion=0" audit-ID="e2ee8a35-2e20-4497-a8d7-e99bc319d1c7"
	E0318 22:43:51.173654       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.13:8443->10.244.0.25:47996: read: connection reset by peer
	I0318 22:43:52.401778       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0318 22:43:59.577131       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0318 22:44:00.617740       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [851fe0b66ee614b81d0234b994f66f222ee11e7ab31b2f1743d257c721834003] <==
	I0318 22:42:31.967885       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0318 22:42:31.978635       1 event.go:376] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0318 22:42:31.978769       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0318 22:42:37.270252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="9.511404ms"
	I0318 22:42:37.270360       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="50.486µs"
	I0318 22:43:01.024510       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0318 22:43:01.026627       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0318 22:43:01.077864       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0318 22:43:01.080265       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0318 22:43:26.660004       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="96.051µs"
	I0318 22:43:33.702004       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="10.851473ms"
	I0318 22:43:33.703362       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="108.096µs"
	I0318 22:43:34.154195       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0318 22:43:34.179677       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 22:43:34.320629       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 22:43:36.765896       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 22:43:40.451181       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 22:43:40.477592       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="15.047873ms"
	I0318 22:43:40.477701       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="50.006µs"
	I0318 22:43:53.712496       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="6.035µs"
	I0318 22:43:55.169533       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 22:43:55.451820       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 22:43:57.585754       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="3.868µs"
	I0318 22:43:58.932453       1 event.go:376] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	E0318 22:44:00.619971       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [66701875000917f3946ecc9cda1f877024e5374f4bd098d1ced23981a63a2d34] <==
	I0318 22:40:57.252333       1 server_others.go:72] "Using iptables proxy"
	I0318 22:40:57.267185       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.13"]
	I0318 22:40:57.370463       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0318 22:40:57.370506       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 22:40:57.370523       1 server_others.go:168] "Using iptables Proxier"
	I0318 22:40:57.381925       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 22:40:57.382291       1 server.go:865] "Version info" version="v1.29.3"
	I0318 22:40:57.382328       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 22:40:57.389293       1 config.go:188] "Starting service config controller"
	I0318 22:40:57.389328       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 22:40:57.389344       1 config.go:97] "Starting endpoint slice config controller"
	I0318 22:40:57.389348       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 22:40:57.389708       1 config.go:315] "Starting node config controller"
	I0318 22:40:57.389714       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 22:40:57.489996       1 shared_informer.go:318] Caches are synced for node config
	I0318 22:40:57.490156       1 shared_informer.go:318] Caches are synced for service config
	I0318 22:40:57.490176       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ff8846e9d0768c500f7056ee945d1d20bfb02db0f95b59582cea53d1d10700ad] <==
	W0318 22:40:40.404298       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 22:40:40.404441       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 22:40:40.404739       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 22:40:40.404850       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 22:40:40.405107       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 22:40:40.405216       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 22:40:40.405359       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 22:40:40.405517       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 22:40:41.290811       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 22:40:41.290878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 22:40:41.328322       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 22:40:41.328372       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 22:40:41.333921       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 22:40:41.334155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 22:40:41.418689       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 22:40:41.419762       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 22:40:41.445990       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 22:40:41.446525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 22:40:41.531112       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 22:40:41.531373       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 22:40:41.571891       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 22:40:41.571942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 22:40:41.779161       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 22:40:41.779754       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 22:40:44.992237       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.873885    1235 scope.go:117] "RemoveContainer" containerID="e7f391dc93997527c35f8a4586c86b2a73b6be611e34adeba89bbb58453aa1ba"
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.877378    1235 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e10b970e3064d2a8d2552d1c744bcc66831fde1c18cd22eabde96f91bbc84ce"
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.893255    1235 scope.go:117] "RemoveContainer" containerID="0b165fc5a15c678cdfd78e9d53b6435668f88971c24b1ac08a13b25216548157"
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.915285    1235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-cgroup\") pod \"89cfa688-5a2b-4840-8293-eddaaea69238\" (UID: \"89cfa688-5a2b-4840-8293-eddaaea69238\") "
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.915338    1235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-host\") pod \"89cfa688-5a2b-4840-8293-eddaaea69238\" (UID: \"89cfa688-5a2b-4840-8293-eddaaea69238\") "
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.915367    1235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-debugfs\") pod \"89cfa688-5a2b-4840-8293-eddaaea69238\" (UID: \"89cfa688-5a2b-4840-8293-eddaaea69238\") "
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.915389    1235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-run\") pod \"89cfa688-5a2b-4840-8293-eddaaea69238\" (UID: \"89cfa688-5a2b-4840-8293-eddaaea69238\") "
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.915435    1235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-modules\") pod \"89cfa688-5a2b-4840-8293-eddaaea69238\" (UID: \"89cfa688-5a2b-4840-8293-eddaaea69238\") "
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.915457    1235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg9gd\" (UniqueName: \"kubernetes.io/projected/89cfa688-5a2b-4840-8293-eddaaea69238-kube-api-access-hg9gd\") pod \"89cfa688-5a2b-4840-8293-eddaaea69238\" (UID: \"89cfa688-5a2b-4840-8293-eddaaea69238\") "
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.915473    1235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-bpffs\") pod \"89cfa688-5a2b-4840-8293-eddaaea69238\" (UID: \"89cfa688-5a2b-4840-8293-eddaaea69238\") "
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.915540    1235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-bpffs" (OuterVolumeSpecName: "bpffs") pod "89cfa688-5a2b-4840-8293-eddaaea69238" (UID: "89cfa688-5a2b-4840-8293-eddaaea69238"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.915647    1235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-cgroup" (OuterVolumeSpecName: "cgroup") pod "89cfa688-5a2b-4840-8293-eddaaea69238" (UID: "89cfa688-5a2b-4840-8293-eddaaea69238"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.915697    1235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-host" (OuterVolumeSpecName: "host") pod "89cfa688-5a2b-4840-8293-eddaaea69238" (UID: "89cfa688-5a2b-4840-8293-eddaaea69238"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.915712    1235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-debugfs" (OuterVolumeSpecName: "debugfs") pod "89cfa688-5a2b-4840-8293-eddaaea69238" (UID: "89cfa688-5a2b-4840-8293-eddaaea69238"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.915725    1235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-run" (OuterVolumeSpecName: "run") pod "89cfa688-5a2b-4840-8293-eddaaea69238" (UID: "89cfa688-5a2b-4840-8293-eddaaea69238"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.915736    1235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-modules" (OuterVolumeSpecName: "modules") pod "89cfa688-5a2b-4840-8293-eddaaea69238" (UID: "89cfa688-5a2b-4840-8293-eddaaea69238"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 18 22:43:59 addons-935788 kubelet[1235]: I0318 22:43:59.920158    1235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89cfa688-5a2b-4840-8293-eddaaea69238-kube-api-access-hg9gd" (OuterVolumeSpecName: "kube-api-access-hg9gd") pod "89cfa688-5a2b-4840-8293-eddaaea69238" (UID: "89cfa688-5a2b-4840-8293-eddaaea69238"). InnerVolumeSpecName "kube-api-access-hg9gd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 18 22:44:00 addons-935788 kubelet[1235]: I0318 22:44:00.016877    1235 reconciler_common.go:300] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-debugfs\") on node \"addons-935788\" DevicePath \"\""
	Mar 18 22:44:00 addons-935788 kubelet[1235]: I0318 22:44:00.016929    1235 reconciler_common.go:300] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-run\") on node \"addons-935788\" DevicePath \"\""
	Mar 18 22:44:00 addons-935788 kubelet[1235]: I0318 22:44:00.016942    1235 reconciler_common.go:300] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-bpffs\") on node \"addons-935788\" DevicePath \"\""
	Mar 18 22:44:00 addons-935788 kubelet[1235]: I0318 22:44:00.016951    1235 reconciler_common.go:300] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-modules\") on node \"addons-935788\" DevicePath \"\""
	Mar 18 22:44:00 addons-935788 kubelet[1235]: I0318 22:44:00.016962    1235 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hg9gd\" (UniqueName: \"kubernetes.io/projected/89cfa688-5a2b-4840-8293-eddaaea69238-kube-api-access-hg9gd\") on node \"addons-935788\" DevicePath \"\""
	Mar 18 22:44:00 addons-935788 kubelet[1235]: I0318 22:44:00.016971    1235 reconciler_common.go:300] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-cgroup\") on node \"addons-935788\" DevicePath \"\""
	Mar 18 22:44:00 addons-935788 kubelet[1235]: I0318 22:44:00.016979    1235 reconciler_common.go:300] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89cfa688-5a2b-4840-8293-eddaaea69238-host\") on node \"addons-935788\" DevicePath \"\""
	Mar 18 22:44:01 addons-935788 kubelet[1235]: I0318 22:44:01.444951    1235 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89cfa688-5a2b-4840-8293-eddaaea69238" path="/var/lib/kubelet/pods/89cfa688-5a2b-4840-8293-eddaaea69238/volumes"
	
	
	==> storage-provisioner [3fa9f07cb115a573fbd12e608693c75e5d844c355a081d105114529399cf04c2] <==
	I0318 22:41:04.859499       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 22:41:04.898302       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 22:41:04.898359       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 22:41:05.055950       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 22:41:05.056166       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-935788_c6d3b0ca-5f25-47ee-835c-89eb0ef8a026!
	I0318 22:41:05.057189       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b0c7c604-3410-4754-9099-fda1be880d26", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-935788_c6d3b0ca-5f25-47ee-835c-89eb0ef8a026 became leader
	I0318 22:41:05.159709       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-935788_c6d3b0ca-5f25-47ee-835c-89eb0ef8a026!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-935788 -n addons-935788
helpers_test.go:261: (dbg) Run:  kubectl --context addons-935788 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-cmfc7 ingress-nginx-admission-patch-g2wp4 helper-pod-delete-pvc-dc5ffa50-03c1-493c-ae58-8e564d4e9229
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-935788 describe pod ingress-nginx-admission-create-cmfc7 ingress-nginx-admission-patch-g2wp4 helper-pod-delete-pvc-dc5ffa50-03c1-493c-ae58-8e564d4e9229
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-935788 describe pod ingress-nginx-admission-create-cmfc7 ingress-nginx-admission-patch-g2wp4 helper-pod-delete-pvc-dc5ffa50-03c1-493c-ae58-8e564d4e9229: exit status 1 (59.383731ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-cmfc7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-g2wp4" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-dc5ffa50-03c1-493c-ae58-8e564d4e9229" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-935788 describe pod ingress-nginx-admission-create-cmfc7 ingress-nginx-admission-patch-g2wp4 helper-pod-delete-pvc-dc5ffa50-03c1-493c-ae58-8e564d4e9229: exit status 1
--- FAIL: TestAddons/parallel/Registry (28.16s)


Test pass (293/333)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 65.97
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.29.3/json-events 18.57
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.07
18 TestDownloadOnly/v1.29.3/DeleteAll 0.13
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.30.0-beta.0/json-events 67.5
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.55
31 TestOffline 126.91
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 213.04
39 TestAddons/parallel/Ingress 22.93
40 TestAddons/parallel/InspektorGadget 11.01
41 TestAddons/parallel/MetricsServer 5.77
42 TestAddons/parallel/HelmTiller 19.91
44 TestAddons/parallel/CSI 43.1
45 TestAddons/parallel/Headlamp 13.94
46 TestAddons/parallel/CloudSpanner 5.57
47 TestAddons/parallel/LocalPath 66.31
48 TestAddons/parallel/NvidiaDevicePlugin 6.6
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.11
53 TestAddons/StoppedEnableDisable 92.71
54 TestCertOptions 97.51
55 TestCertExpiration 305.4
57 TestForceSystemdFlag 96.63
58 TestForceSystemdEnv 86.31
60 TestKVMDriverInstallOrUpdate 15.63
64 TestErrorSpam/setup 47.77
65 TestErrorSpam/start 0.35
66 TestErrorSpam/status 0.75
67 TestErrorSpam/pause 1.65
68 TestErrorSpam/unpause 1.65
69 TestErrorSpam/stop 4.8
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 96.98
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 41.99
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.92
81 TestFunctional/serial/CacheCmd/cache/add_local 3.32
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
83 TestFunctional/serial/CacheCmd/cache/list 0.05
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
86 TestFunctional/serial/CacheCmd/cache/delete 0.11
87 TestFunctional/serial/MinikubeKubectlCmd 0.11
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
89 TestFunctional/serial/ExtraConfig 47.35
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 1.4
92 TestFunctional/serial/LogsFileCmd 1.48
93 TestFunctional/serial/InvalidService 4.71
95 TestFunctional/parallel/ConfigCmd 0.41
96 TestFunctional/parallel/DashboardCmd 22.85
97 TestFunctional/parallel/DryRun 0.28
98 TestFunctional/parallel/InternationalLanguage 0.14
99 TestFunctional/parallel/StatusCmd 0.93
103 TestFunctional/parallel/ServiceCmdConnect 11.61
104 TestFunctional/parallel/AddonsCmd 0.28
105 TestFunctional/parallel/PersistentVolumeClaim 51.24
107 TestFunctional/parallel/SSHCmd 0.39
108 TestFunctional/parallel/CpCmd 1.33
109 TestFunctional/parallel/MySQL 44.75
110 TestFunctional/parallel/FileSync 0.3
111 TestFunctional/parallel/CertSync 1.48
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
119 TestFunctional/parallel/License 0.79
120 TestFunctional/parallel/ServiceCmd/DeployApp 10.29
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
122 TestFunctional/parallel/MountCmd/any-port 10.59
123 TestFunctional/parallel/ProfileCmd/profile_list 0.29
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
125 TestFunctional/parallel/ServiceCmd/List 0.85
126 TestFunctional/parallel/MountCmd/specific-port 1.57
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.87
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
129 TestFunctional/parallel/ServiceCmd/Format 0.4
130 TestFunctional/parallel/Version/short 0.08
131 TestFunctional/parallel/Version/components 0.68
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.52
133 TestFunctional/parallel/ServiceCmd/URL 0.35
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
138 TestFunctional/parallel/ImageCommands/ImageBuild 5.81
139 TestFunctional/parallel/ImageCommands/Setup 2.62
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.01
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.97
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.93
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.07
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.54
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.13
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMultiControlPlane/serial/StartCluster 284
166 TestMultiControlPlane/serial/DeployApp 6.95
167 TestMultiControlPlane/serial/PingHostFromPods 1.31
168 TestMultiControlPlane/serial/AddWorkerNode 48.13
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
171 TestMultiControlPlane/serial/CopyFile 13.15
172 TestMultiControlPlane/serial/StopSecondaryNode 92.39
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.4
174 TestMultiControlPlane/serial/RestartSecondaryNode 45.03
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.53
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 487.9
177 TestMultiControlPlane/serial/DeleteSecondaryNode 8.06
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
179 TestMultiControlPlane/serial/StopCluster 275.76
180 TestMultiControlPlane/serial/RestartCluster 159.02
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
182 TestMultiControlPlane/serial/AddSecondaryNode 74.43
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.57
187 TestJSONOutput/start/Command 99.53
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.73
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.65
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.35
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.2
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 92.85
219 TestMountStart/serial/StartWithMountFirst 31.01
220 TestMountStart/serial/VerifyMountFirst 0.39
221 TestMountStart/serial/StartWithMountSecond 28.55
222 TestMountStart/serial/VerifyMountSecond 0.37
223 TestMountStart/serial/DeleteFirst 0.68
224 TestMountStart/serial/VerifyMountPostDelete 0.38
225 TestMountStart/serial/Stop 1.43
226 TestMountStart/serial/RestartStopped 24.18
227 TestMountStart/serial/VerifyMountPostStop 0.39
230 TestMultiNode/serial/FreshStart2Nodes 104.23
231 TestMultiNode/serial/DeployApp2Nodes 6.04
232 TestMultiNode/serial/PingHostFrom2Pods 0.84
233 TestMultiNode/serial/AddNode 42.47
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.22
236 TestMultiNode/serial/CopyFile 7.29
237 TestMultiNode/serial/StopNode 2.36
238 TestMultiNode/serial/StartAfterStop 26.02
239 TestMultiNode/serial/RestartKeepsNodes 294.85
240 TestMultiNode/serial/DeleteNode 2.14
241 TestMultiNode/serial/StopMultiNode 184.13
242 TestMultiNode/serial/RestartMultiNode 78.02
243 TestMultiNode/serial/ValidateNameConflict 45.53
248 TestPreload 379.17
250 TestScheduledStopUnix 118.16
254 TestRunningBinaryUpgrade 204.1
256 TestKubernetesUpgrade 199.93
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 100.67
261 TestNoKubernetes/serial/StartWithStopK8s 49.57
262 TestStoppedBinaryUpgrade/Setup 2.96
263 TestStoppedBinaryUpgrade/Upgrade 156.49
264 TestNoKubernetes/serial/Start 39.93
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
266 TestNoKubernetes/serial/ProfileList 5.59
267 TestNoKubernetes/serial/Stop 1.65
268 TestNoKubernetes/serial/StartNoArgs 41.28
276 TestNetworkPlugins/group/false 3.05
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
282 TestPause/serial/Start 118.44
283 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
291 TestNetworkPlugins/group/auto/Start 127.15
292 TestNetworkPlugins/group/kindnet/Start 70.36
293 TestPause/serial/SecondStartNoReconfiguration 53.01
294 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
295 TestPause/serial/Pause 0.73
296 TestPause/serial/VerifyStatus 0.25
297 TestPause/serial/Unpause 0.7
298 TestPause/serial/PauseAgain 0.88
299 TestPause/serial/DeletePaused 0.8
300 TestPause/serial/VerifyDeletedResources 0.55
301 TestNetworkPlugins/group/calico/Start 103.76
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
303 TestNetworkPlugins/group/kindnet/NetCatPod 9.4
304 TestNetworkPlugins/group/auto/KubeletFlags 0.24
305 TestNetworkPlugins/group/auto/NetCatPod 10.31
306 TestNetworkPlugins/group/kindnet/DNS 0.2
307 TestNetworkPlugins/group/kindnet/Localhost 0.16
308 TestNetworkPlugins/group/kindnet/HairPin 0.15
309 TestNetworkPlugins/group/auto/DNS 0.16
310 TestNetworkPlugins/group/auto/Localhost 0.14
311 TestNetworkPlugins/group/auto/HairPin 0.16
312 TestNetworkPlugins/group/custom-flannel/Start 95.96
313 TestNetworkPlugins/group/bridge/Start 95.73
314 TestNetworkPlugins/group/flannel/Start 106.7
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.22
317 TestNetworkPlugins/group/calico/NetCatPod 9.28
318 TestNetworkPlugins/group/calico/DNS 0.22
319 TestNetworkPlugins/group/calico/Localhost 0.17
320 TestNetworkPlugins/group/calico/HairPin 0.16
321 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
322 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
323 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
324 TestNetworkPlugins/group/bridge/NetCatPod 9.37
325 TestNetworkPlugins/group/custom-flannel/DNS 0.21
326 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
327 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
328 TestNetworkPlugins/group/bridge/DNS 0.18
329 TestNetworkPlugins/group/bridge/Localhost 0.18
330 TestNetworkPlugins/group/bridge/HairPin 0.17
331 TestNetworkPlugins/group/enable-default-cni/Start 73.22
333 TestStartStop/group/old-k8s-version/serial/FirstStart 176.83
335 TestStartStop/group/no-preload/serial/FirstStart 232.23
336 TestNetworkPlugins/group/flannel/ControllerPod 6.01
337 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
338 TestNetworkPlugins/group/flannel/NetCatPod 9.28
339 TestNetworkPlugins/group/flannel/DNS 0.35
340 TestNetworkPlugins/group/flannel/Localhost 0.19
341 TestNetworkPlugins/group/flannel/HairPin 0.17
342 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
343 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
345 TestStartStop/group/embed-certs/serial/FirstStart 107.73
346 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
347 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
348 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
350 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 66.28
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.31
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.24
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.52
354 TestStartStop/group/embed-certs/serial/DeployApp 10.3
355 TestStartStop/group/old-k8s-version/serial/DeployApp 10.41
356 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
357 TestStartStop/group/embed-certs/serial/Stop 92.48
358 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1
359 TestStartStop/group/old-k8s-version/serial/Stop 92.49
360 TestStartStop/group/no-preload/serial/DeployApp 11.29
361 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.96
362 TestStartStop/group/no-preload/serial/Stop 92.47
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
364 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 297.1
365 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
366 TestStartStop/group/embed-certs/serial/SecondStart 326.31
367 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
368 TestStartStop/group/old-k8s-version/serial/SecondStart 599.96
369 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
370 TestStartStop/group/no-preload/serial/SecondStart 321.41
371 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
372 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
373 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
374 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.93
376 TestStartStop/group/newest-cni/serial/FirstStart 60.22
377 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 19.01
378 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
379 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
380 TestStartStop/group/embed-certs/serial/Pause 2.95
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
383 TestStartStop/group/newest-cni/serial/Stop 2.44
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
385 TestStartStop/group/newest-cni/serial/SecondStart 40.04
386 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
388 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
389 TestStartStop/group/no-preload/serial/Pause 2.89
390 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
393 TestStartStop/group/newest-cni/serial/Pause 2.45
394 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
395 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
396 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
397 TestStartStop/group/old-k8s-version/serial/Pause 2.46
TestDownloadOnly/v1.20.0/json-events (65.97s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-442829 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-442829 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (1m5.965108315s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (65.97s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-442829
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-442829: exit status 85 (66.701844ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-442829 | jenkins | v1.32.0 | 18 Mar 24 22:37 UTC |          |
	|         | -p download-only-442829        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 22:37:27
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 22:37:27.024787   13750 out.go:291] Setting OutFile to fd 1 ...
	I0318 22:37:27.025004   13750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:37:27.025013   13750 out.go:304] Setting ErrFile to fd 2...
	I0318 22:37:27.025017   13750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:37:27.025192   13750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
	W0318 22:37:27.025290   13750 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17786-6465/.minikube/config/config.json: open /home/jenkins/minikube-integration/17786-6465/.minikube/config/config.json: no such file or directory
	I0318 22:37:27.025830   13750 out.go:298] Setting JSON to true
	I0318 22:37:27.026625   13750 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1190,"bootTime":1710800257,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 22:37:27.026682   13750 start.go:139] virtualization: kvm guest
	I0318 22:37:27.029230   13750 out.go:97] [download-only-442829] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 22:37:27.030619   13750 out.go:169] MINIKUBE_LOCATION=17786
	W0318 22:37:27.029333   13750 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/17786-6465/.minikube/cache/preloaded-tarball: no such file or directory
	I0318 22:37:27.029420   13750 notify.go:220] Checking for updates...
	I0318 22:37:27.033433   13750 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 22:37:27.034881   13750 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17786-6465/kubeconfig
	I0318 22:37:27.036214   13750 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17786-6465/.minikube
	I0318 22:37:27.037615   13750 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0318 22:37:27.040028   13750 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 22:37:27.040250   13750 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 22:37:27.132337   13750 out.go:97] Using the kvm2 driver based on user configuration
	I0318 22:37:27.132372   13750 start.go:297] selected driver: kvm2
	I0318 22:37:27.132388   13750 start.go:901] validating driver "kvm2" against <nil>
	I0318 22:37:27.132713   13750 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 22:37:27.132831   13750 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17786-6465/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 22:37:27.146026   13750 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 22:37:27.146064   13750 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 22:37:27.146497   13750 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0318 22:37:27.146634   13750 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 22:37:27.146687   13750 cni.go:84] Creating CNI manager for ""
	I0318 22:37:27.146700   13750 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0318 22:37:27.146707   13750 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 22:37:27.146750   13750 start.go:340] cluster config:
	{Name:download-only-442829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-442829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 22:37:27.146905   13750 iso.go:125] acquiring lock: {Name:mk80345eb1a53e1b6e30e36ffde20e6b42fffb9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 22:37:27.148624   13750 out.go:97] Downloading VM boot image ...
	I0318 22:37:27.148650   13750 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17786-6465/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0318 22:37:39.838729   13750 out.go:97] Starting "download-only-442829" primary control-plane node in "download-only-442829" cluster
	I0318 22:37:39.838757   13750 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0318 22:37:39.995517   13750 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0318 22:37:39.995550   13750 cache.go:56] Caching tarball of preloaded images
	I0318 22:37:39.995701   13750 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0318 22:37:39.997574   13750 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0318 22:37:39.997592   13750 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0318 22:37:40.151625   13750 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/17786-6465/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0318 22:37:58.930544   13750 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0318 22:37:58.930671   13750 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/17786-6465/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0318 22:37:59.827110   13750 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0318 22:37:59.827584   13750 profile.go:142] Saving config to /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/download-only-442829/config.json ...
	I0318 22:37:59.827641   13750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/download-only-442829/config.json: {Name:mka88b56c70afa3c5aad7370799a32ae347bcb7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:37:59.827835   13750 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0318 22:37:59.828068   13750 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17786-6465/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-442829 host does not exist
	  To start a cluster, run: "minikube start -p download-only-442829"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-442829
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.29.3/json-events (18.57s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-158809 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-158809 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (18.567882785s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (18.57s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-158809
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-158809: exit status 85 (65.991799ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-442829 | jenkins | v1.32.0 | 18 Mar 24 22:37 UTC |                     |
	|         | -p download-only-442829        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 18 Mar 24 22:38 UTC | 18 Mar 24 22:38 UTC |
	| delete  | -p download-only-442829        | download-only-442829 | jenkins | v1.32.0 | 18 Mar 24 22:38 UTC | 18 Mar 24 22:38 UTC |
	| start   | -o=json --download-only        | download-only-158809 | jenkins | v1.32.0 | 18 Mar 24 22:38 UTC |                     |
	|         | -p download-only-158809        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 22:38:33
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 22:38:33.311557   14060 out.go:291] Setting OutFile to fd 1 ...
	I0318 22:38:33.311672   14060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:38:33.311682   14060 out.go:304] Setting ErrFile to fd 2...
	I0318 22:38:33.311689   14060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:38:33.311882   14060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
	I0318 22:38:33.312458   14060 out.go:298] Setting JSON to true
	I0318 22:38:33.313263   14060 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1256,"bootTime":1710800257,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 22:38:33.313319   14060 start.go:139] virtualization: kvm guest
	I0318 22:38:33.315348   14060 out.go:97] [download-only-158809] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 22:38:33.316857   14060 out.go:169] MINIKUBE_LOCATION=17786
	I0318 22:38:33.315529   14060 notify.go:220] Checking for updates...
	I0318 22:38:33.319237   14060 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 22:38:33.320579   14060 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17786-6465/kubeconfig
	I0318 22:38:33.321791   14060 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17786-6465/.minikube
	I0318 22:38:33.322851   14060 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0318 22:38:33.324835   14060 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 22:38:33.325040   14060 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 22:38:33.354539   14060 out.go:97] Using the kvm2 driver based on user configuration
	I0318 22:38:33.354555   14060 start.go:297] selected driver: kvm2
	I0318 22:38:33.354560   14060 start.go:901] validating driver "kvm2" against <nil>
	I0318 22:38:33.354866   14060 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 22:38:33.354926   14060 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17786-6465/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 22:38:33.368345   14060 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 22:38:33.368390   14060 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 22:38:33.368807   14060 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0318 22:38:33.368926   14060 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 22:38:33.368978   14060 cni.go:84] Creating CNI manager for ""
	I0318 22:38:33.368990   14060 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0318 22:38:33.368999   14060 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 22:38:33.369044   14060 start.go:340] cluster config:
	{Name:download-only-158809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-158809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 22:38:33.369120   14060 iso.go:125] acquiring lock: {Name:mk80345eb1a53e1b6e30e36ffde20e6b42fffb9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 22:38:33.370493   14060 out.go:97] Starting "download-only-158809" primary control-plane node in "download-only-158809" cluster
	I0318 22:38:33.370507   14060 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0318 22:38:33.522268   14060 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	I0318 22:38:33.522287   14060 cache.go:56] Caching tarball of preloaded images
	I0318 22:38:33.522400   14060 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0318 22:38:33.524022   14060 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0318 22:38:33.524037   14060 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4 ...
	I0318 22:38:33.677519   14060 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:dcad3363f354722395d68e96a1f5de54 -> /home/jenkins/minikube-integration/17786-6465/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-158809 host does not exist
	  To start a cluster, run: "minikube start -p download-only-158809"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.07s)
TestDownloadOnly/v1.29.3/DeleteAll (0.13s)
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.13s)
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.12s)
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-158809
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.12s)
TestDownloadOnly/v1.30.0-beta.0/json-events (67.5s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-497199 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-497199 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (1m7.499517829s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (67.50s)
TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)
TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-497199
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-497199: exit status 85 (67.769365ms)
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-442829 | jenkins | v1.32.0 | 18 Mar 24 22:37 UTC |                     |
	|         | -p download-only-442829             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.32.0 | 18 Mar 24 22:38 UTC | 18 Mar 24 22:38 UTC |
	| delete  | -p download-only-442829             | download-only-442829 | jenkins | v1.32.0 | 18 Mar 24 22:38 UTC | 18 Mar 24 22:38 UTC |
	| start   | -o=json --download-only             | download-only-158809 | jenkins | v1.32.0 | 18 Mar 24 22:38 UTC |                     |
	|         | -p download-only-158809             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.32.0 | 18 Mar 24 22:38 UTC | 18 Mar 24 22:38 UTC |
	| delete  | -p download-only-158809             | download-only-158809 | jenkins | v1.32.0 | 18 Mar 24 22:38 UTC | 18 Mar 24 22:38 UTC |
	| start   | -o=json --download-only             | download-only-497199 | jenkins | v1.32.0 | 18 Mar 24 22:38 UTC |                     |
	|         | -p download-only-497199             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 22:38:52
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 22:38:52.200186   14251 out.go:291] Setting OutFile to fd 1 ...
	I0318 22:38:52.200432   14251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:38:52.200440   14251 out.go:304] Setting ErrFile to fd 2...
	I0318 22:38:52.200444   14251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:38:52.200611   14251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
	I0318 22:38:52.201117   14251 out.go:298] Setting JSON to true
	I0318 22:38:52.201853   14251 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1275,"bootTime":1710800257,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 22:38:52.201910   14251 start.go:139] virtualization: kvm guest
	I0318 22:38:52.203861   14251 out.go:97] [download-only-497199] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 22:38:52.205451   14251 out.go:169] MINIKUBE_LOCATION=17786
	I0318 22:38:52.203994   14251 notify.go:220] Checking for updates...
	I0318 22:38:52.208200   14251 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 22:38:52.209526   14251 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17786-6465/kubeconfig
	I0318 22:38:52.210633   14251 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17786-6465/.minikube
	I0318 22:38:52.211722   14251 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0318 22:38:52.214041   14251 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 22:38:52.214230   14251 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 22:38:52.243975   14251 out.go:97] Using the kvm2 driver based on user configuration
	I0318 22:38:52.244009   14251 start.go:297] selected driver: kvm2
	I0318 22:38:52.244015   14251 start.go:901] validating driver "kvm2" against <nil>
	I0318 22:38:52.244409   14251 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 22:38:52.244514   14251 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17786-6465/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 22:38:52.257894   14251 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 22:38:52.257934   14251 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 22:38:52.258392   14251 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0318 22:38:52.258521   14251 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 22:38:52.258574   14251 cni.go:84] Creating CNI manager for ""
	I0318 22:38:52.258598   14251 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0318 22:38:52.258605   14251 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 22:38:52.258645   14251 start.go:340] cluster config:
	{Name:download-only-497199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-497199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 22:38:52.258722   14251 iso.go:125] acquiring lock: {Name:mk80345eb1a53e1b6e30e36ffde20e6b42fffb9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 22:38:52.260130   14251 out.go:97] Starting "download-only-497199" primary control-plane node in "download-only-497199" cluster
	I0318 22:38:52.260141   14251 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0318 22:38:53.016757   14251 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4
	I0318 22:38:53.016781   14251 cache.go:56] Caching tarball of preloaded images
	I0318 22:38:53.016935   14251 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0318 22:38:53.018500   14251 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0318 22:38:53.018514   14251 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4 ...
	I0318 22:38:53.168654   14251 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:da32f15385f98142eac11fb4e1af2dd3 -> /home/jenkins/minikube-integration/17786-6465/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4
	I0318 22:39:15.790450   14251 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4 ...
	I0318 22:39:15.790551   14251 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/17786-6465/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-amd64.tar.lz4 ...
	I0318 22:39:16.540908   14251 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on containerd
	I0318 22:39:16.541294   14251 profile.go:142] Saving config to /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/download-only-497199/config.json ...
	I0318 22:39:16.541330   14251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/download-only-497199/config.json: {Name:mk693fbc3c6332f725fbaa6277cd41c9619973b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:39:16.541516   14251 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0318 22:39:16.541721   14251 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17786-6465/.minikube/cache/linux/amd64/v1.30.0-beta.0/kubectl
	
	
	* The control-plane node download-only-497199 host does not exist
	  To start a cluster, run: "minikube start -p download-only-497199"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.07s)
TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.13s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.13s)
TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.12s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-497199
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.12s)
TestBinaryMirror (0.55s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-728007 --alsologtostderr --binary-mirror http://127.0.0.1:44837 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-728007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-728007
--- PASS: TestBinaryMirror (0.55s)
TestOffline (126.91s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-698998 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-698998 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m5.88290646s)
helpers_test.go:175: Cleaning up "offline-containerd-698998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-698998
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-698998: (1.027752817s)
--- PASS: TestOffline (126.91s)
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-935788
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-935788: exit status 85 (67.423743ms)
-- stdout --
	* Profile "addons-935788" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-935788"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-935788
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-935788: exit status 85 (65.86204ms)
-- stdout --
	* Profile "addons-935788" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-935788"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (213.04s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-935788 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-935788 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m33.037314334s)
--- PASS: TestAddons/Setup (213.04s)

TestAddons/parallel/Ingress (22.93s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-935788 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-935788 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-935788 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6d24439b-0e5b-4f8e-907a-2b0a1c415ed1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6d24439b-0e5b-4f8e-907a-2b0a1c415ed1] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.004291487s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-935788 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-935788 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-935788 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.13
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-935788 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-935788 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-935788 addons disable ingress --alsologtostderr -v=1: (7.748720431s)
--- PASS: TestAddons/parallel/Ingress (22.93s)

TestAddons/parallel/InspektorGadget (11.01s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2s55v" [89cfa688-5a2b-4840-8293-eddaaea69238] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.008871189s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-935788
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-935788: (6.003094699s)
--- PASS: TestAddons/parallel/InspektorGadget (11.01s)

TestAddons/parallel/MetricsServer (5.77s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.991022ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-f486p" [1185426d-ada6-4c7a-aeff-3481c9cfb03d] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005144435s
addons_test.go:415: (dbg) Run:  kubectl --context addons-935788 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-935788 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.77s)

TestAddons/parallel/HelmTiller (19.91s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 23.028014ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-plx4j" [10772ad2-52ab-449b-bd96-23b29d59b221] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.102063169s
addons_test.go:473: (dbg) Run:  kubectl --context addons-935788 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-935788 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (13.889349212s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-935788 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (19.91s)

TestAddons/parallel/CSI (43.1s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 22.375634ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-935788 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-935788 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [58d7cd50-00f2-47e5-9afb-6fd7e2256134] Pending
helpers_test.go:344: "task-pv-pod" [58d7cd50-00f2-47e5-9afb-6fd7e2256134] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [58d7cd50-00f2-47e5-9afb-6fd7e2256134] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004999802s
addons_test.go:584: (dbg) Run:  kubectl --context addons-935788 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-935788 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-935788 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-935788 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-935788 delete pod task-pv-pod: (1.423997257s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-935788 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-935788 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-935788 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
2024/03/18 22:43:59 [DEBUG] GET http://192.168.39.13:5000
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [15913949-3374-4364-8818-e5dda6399f18] Pending
helpers_test.go:344: "task-pv-pod-restore" [15913949-3374-4364-8818-e5dda6399f18] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [15913949-3374-4364-8818-e5dda6399f18] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.00481548s
addons_test.go:626: (dbg) Run:  kubectl --context addons-935788 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-935788 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-935788 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-935788 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-935788 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.952589623s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-935788 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.10s)

TestAddons/parallel/Headlamp (13.94s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-935788 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-qbv5q" [e4a57f35-9d50-4542-ada8-4a51f2f1ab19] Pending
helpers_test.go:344: "headlamp-5485c556b-qbv5q" [e4a57f35-9d50-4542-ada8-4a51f2f1ab19] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-qbv5q" [e4a57f35-9d50-4542-ada8-4a51f2f1ab19] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004102372s
--- PASS: TestAddons/parallel/Headlamp (13.94s)

TestAddons/parallel/CloudSpanner (5.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-vntjl" [78c163c1-62ea-4f18-98f3-5b364292e428] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004038146s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-935788
--- PASS: TestAddons/parallel/CloudSpanner (5.57s)

TestAddons/parallel/LocalPath (66.31s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-935788 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-935788 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-935788 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b5934d40-2775-48ef-ab22-b6ccabddb8f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b5934d40-2775-48ef-ab22-b6ccabddb8f0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b5934d40-2775-48ef-ab22-b6ccabddb8f0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 14.004289264s
addons_test.go:891: (dbg) Run:  kubectl --context addons-935788 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-935788 ssh "cat /opt/local-path-provisioner/pvc-dc5ffa50-03c1-493c-ae58-8e564d4e9229_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-935788 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-935788 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-935788 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-935788 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.446200502s)
--- PASS: TestAddons/parallel/LocalPath (66.31s)

TestAddons/parallel/NvidiaDevicePlugin (6.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mhwxb" [8c3d0ec7-cc92-4284-af18-057f03d243ae] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005010924s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-935788
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.60s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-gs5mh" [2c3acf6c-29f3-4f09-9e6c-9af978d1cb06] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005625169s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-935788 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-935788 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (92.71s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-935788
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-935788: (1m32.42131261s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-935788
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-935788
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-935788
--- PASS: TestAddons/StoppedEnableDisable (92.71s)

TestCertOptions (97.51s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-738437 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
E0318 23:48:34.005101   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-738437 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m35.984942024s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-738437 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-738437 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-738437 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-738437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-738437
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-738437: (1.058268391s)
--- PASS: TestCertOptions (97.51s)

TestCertExpiration (305.4s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-275531 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-275531 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m20.761093741s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-275531 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-275531 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (43.483290319s)
helpers_test.go:175: Cleaning up "cert-expiration-275531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-275531
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-275531: (1.153296015s)
--- PASS: TestCertExpiration (305.40s)

TestForceSystemdFlag (96.63s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-412361 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-412361 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m35.621194102s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-412361 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-412361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-412361
--- PASS: TestForceSystemdFlag (96.63s)

TestForceSystemdEnv (86.31s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-928981 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-928981 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m25.094379638s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-928981 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-928981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-928981
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-928981: (1.001527217s)
--- PASS: TestForceSystemdEnv (86.31s)

TestKVMDriverInstallOrUpdate (15.63s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (15.63s)

TestErrorSpam/setup (47.77s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-473872 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-473872 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-473872 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-473872 --driver=kvm2  --container-runtime=containerd: (47.766271995s)
--- PASS: TestErrorSpam/setup (47.77s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.75s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 status
--- PASS: TestErrorSpam/status (0.75s)

TestErrorSpam/pause (1.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 pause
--- PASS: TestErrorSpam/pause (1.65s)

TestErrorSpam/unpause (1.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

TestErrorSpam/stop (4.8s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 stop: (1.573248887s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 stop: (1.966940185s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-473872 --log_dir /tmp/nospam-473872 stop: (1.264145766s)
--- PASS: TestErrorSpam/stop (4.80s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17786-6465/.minikube/files/etc/test/nested/copy/13738/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (96.98s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-522698 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0318 22:48:34.004644   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 22:48:34.010598   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 22:48:34.020822   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 22:48:34.041031   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 22:48:34.081261   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 22:48:34.161523   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 22:48:34.321842   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 22:48:34.642369   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 22:48:35.283279   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 22:48:36.563740   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 22:48:39.124497   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 22:48:44.244878   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 22:48:54.486022   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-522698 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m36.976364156s)
--- PASS: TestFunctional/serial/StartWithProxy (96.98s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.99s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-522698 --alsologtostderr -v=8
E0318 22:49:14.966480   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-522698 --alsologtostderr -v=8: (41.992745318s)
functional_test.go:659: soft start took 41.993260794s for "functional-522698" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.99s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-522698 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.92s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-522698 cache add registry.k8s.io/pause:3.1: (1.323122532s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-522698 cache add registry.k8s.io/pause:3.3: (1.295293223s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-522698 cache add registry.k8s.io/pause:latest: (1.301505235s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.92s)

TestFunctional/serial/CacheCmd/cache/add_local (3.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-522698 /tmp/TestFunctionalserialCacheCmdcacheadd_local3052699759/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 cache add minikube-local-cache-test:functional-522698
E0318 22:49:55.928177   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-522698 cache add minikube-local-cache-test:functional-522698: (2.973573388s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 cache delete minikube-local-cache-test:functional-522698
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-522698
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.32s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-522698 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (217.142462ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-522698 cache reload: (1.123856764s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 kubectl -- --context functional-522698 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-522698 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (47.35s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-522698 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-522698 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.351748225s)
functional_test.go:757: restart took 47.351885789s for "functional-522698" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (47.35s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-522698 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.4s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-522698 logs: (1.395371783s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

TestFunctional/serial/LogsFileCmd (1.48s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 logs --file /tmp/TestFunctionalserialLogsFileCmd3584481881/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-522698 logs --file /tmp/TestFunctionalserialLogsFileCmd3584481881/001/logs.txt: (1.482385598s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

TestFunctional/serial/InvalidService (4.71s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-522698 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-522698
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-522698: exit status 115 (286.179186ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.12:30303 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-522698 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-522698 delete -f testdata/invalidsvc.yaml: (1.220422158s)
--- PASS: TestFunctional/serial/InvalidService (4.71s)

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-522698 config get cpus: exit status 14 (78.640979ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-522698 config get cpus: exit status 14 (62.23412ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)

TestFunctional/parallel/DashboardCmd (22.85s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-522698 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-522698 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20360: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (22.85s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-522698 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-522698 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (135.339131ms)

-- stdout --
	* [functional-522698] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17786
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17786-6465/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17786-6465/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0318 22:50:56.885393   20268 out.go:291] Setting OutFile to fd 1 ...
	I0318 22:50:56.885495   20268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:50:56.885504   20268 out.go:304] Setting ErrFile to fd 2...
	I0318 22:50:56.885508   20268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:50:56.885702   20268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
	I0318 22:50:56.886183   20268 out.go:298] Setting JSON to false
	I0318 22:50:56.887007   20268 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2000,"bootTime":1710800257,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 22:50:56.887064   20268 start.go:139] virtualization: kvm guest
	I0318 22:50:56.889262   20268 out.go:177] * [functional-522698] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 22:50:56.890798   20268 notify.go:220] Checking for updates...
	I0318 22:50:56.892203   20268 out.go:177]   - MINIKUBE_LOCATION=17786
	I0318 22:50:56.893393   20268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 22:50:56.894532   20268 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17786-6465/kubeconfig
	I0318 22:50:56.895722   20268 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17786-6465/.minikube
	I0318 22:50:56.896756   20268 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 22:50:56.897962   20268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 22:50:56.899518   20268 config.go:182] Loaded profile config "functional-522698": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0318 22:50:56.899917   20268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:50:56.899968   20268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:50:56.914540   20268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41001
	I0318 22:50:56.914872   20268 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:50:56.915404   20268 main.go:141] libmachine: Using API Version  1
	I0318 22:50:56.915422   20268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:50:56.915758   20268 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:50:56.915942   20268 main.go:141] libmachine: (functional-522698) Calling .DriverName
	I0318 22:50:56.916194   20268 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 22:50:56.916584   20268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:50:56.916631   20268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:50:56.930708   20268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39579
	I0318 22:50:56.931033   20268 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:50:56.931483   20268 main.go:141] libmachine: Using API Version  1
	I0318 22:50:56.931500   20268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:50:56.931785   20268 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:50:56.931960   20268 main.go:141] libmachine: (functional-522698) Calling .DriverName
	I0318 22:50:56.961945   20268 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 22:50:56.963138   20268 start.go:297] selected driver: kvm2
	I0318 22:50:56.963148   20268 start.go:901] validating driver "kvm2" against &{Name:functional-522698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-522698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 22:50:56.963267   20268 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 22:50:56.965379   20268 out.go:177] 
	W0318 22:50:56.966701   20268 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0318 22:50:56.968326   20268 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-522698 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-522698 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-522698 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (137.634979ms)

-- stdout --
	* [functional-522698] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17786
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17786-6465/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17786-6465/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0318 22:50:56.749951   20240 out.go:291] Setting OutFile to fd 1 ...
	I0318 22:50:56.750065   20240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:50:56.750073   20240 out.go:304] Setting ErrFile to fd 2...
	I0318 22:50:56.750078   20240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:50:56.750405   20240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
	I0318 22:50:56.750961   20240 out.go:298] Setting JSON to false
	I0318 22:50:56.751796   20240 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2000,"bootTime":1710800257,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 22:50:56.751856   20240 start.go:139] virtualization: kvm guest
	I0318 22:50:56.754216   20240 out.go:177] * [functional-522698] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0318 22:50:56.755686   20240 out.go:177]   - MINIKUBE_LOCATION=17786
	I0318 22:50:56.756933   20240 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 22:50:56.755695   20240 notify.go:220] Checking for updates...
	I0318 22:50:56.759390   20240 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17786-6465/kubeconfig
	I0318 22:50:56.760670   20240 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17786-6465/.minikube
	I0318 22:50:56.761845   20240 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 22:50:56.763002   20240 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 22:50:56.764461   20240 config.go:182] Loaded profile config "functional-522698": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0318 22:50:56.764879   20240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:50:56.764921   20240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:50:56.779427   20240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I0318 22:50:56.779756   20240 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:50:56.780226   20240 main.go:141] libmachine: Using API Version  1
	I0318 22:50:56.780259   20240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:50:56.780632   20240 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:50:56.780834   20240 main.go:141] libmachine: (functional-522698) Calling .DriverName
	I0318 22:50:56.781064   20240 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 22:50:56.781314   20240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:50:56.781347   20240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:50:56.795651   20240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34741
	I0318 22:50:56.795995   20240 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:50:56.796430   20240 main.go:141] libmachine: Using API Version  1
	I0318 22:50:56.796451   20240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:50:56.796765   20240 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:50:56.796942   20240 main.go:141] libmachine: (functional-522698) Calling .DriverName
	I0318 22:50:56.827190   20240 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0318 22:50:56.828555   20240 start.go:297] selected driver: kvm2
	I0318 22:50:56.828573   20240 start.go:901] validating driver "kvm2" against &{Name:functional-522698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-522698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 22:50:56.828651   20240 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 22:50:56.830641   20240 out.go:177] 
	W0318 22:50:56.831855   20240 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0318 22:50:56.832987   20240 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
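Both DryRun and InternationalLanguage above pass `--memory 250MB` and expect exit status 23 with an RSRC_INSUFFICIENT_REQ_MEMORY message (English and French respectively). The shape of that validation can be sketched as below; the constant and function names are hypothetical stand-ins, not minikube's actual implementation.

```go
package main

import "fmt"

// minUsableMB mirrors the "usable minimum of 1800MB" from the log output.
const minUsableMB = 1800

// validateMemory rejects a requested allocation below the usable minimum,
// the condition that produces RSRC_INSUFFICIENT_REQ_MEMORY in the test.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // rejected, like --memory 250MB in the test
	fmt.Println(validateMemory(4000)) // accepted (the profile's configured 4000MB)
}
```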

TestFunctional/parallel/StatusCmd (0.93s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.93s)
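The `-f` flag exercised above renders status through a Go text/template (note that "kublet" in the format string is literal label text, not a field name). A minimal stand-alone sketch of that rendering, with a stand-in `Status` type rather than minikube's real one:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status is a stand-in for the fields the -f format string references;
// minikube's actual status type has more fields than these four.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// format is the exact string passed to `minikube status -f` in the test.
const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"

// render executes the template against a status value, as the command does.
func render(s Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	if err := tmpl.Execute(&out, s); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	out, _ := render(Status{"Running", "Running", "Running", "Configured"})
	fmt.Println(out)
}
```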

TestFunctional/parallel/ServiceCmdConnect (11.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-522698 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-522698 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-z88fz" [6738e3b3-c132-4481-a87a-ee23d6e24fe1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-z88fz" [6738e3b3-c132-4481-a87a-ee23d6e24fe1] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.005060979s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.12:32097
functional_test.go:1671: http://192.168.39.12:32097: success! body:

Hostname: hello-node-connect-55497b8b78-z88fz

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.12:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.12:32097
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.61s)

TestFunctional/parallel/AddonsCmd (0.28s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

TestFunctional/parallel/PersistentVolumeClaim (51.24s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [bed0883d-92cb-4e5a-bdeb-3f0504908be5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00525364s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-522698 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-522698 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-522698 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-522698 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-522698 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cc4f7dc3-1c17-487d-ba5a-25053f3a99a0] Pending
helpers_test.go:344: "sp-pod" [cc4f7dc3-1c17-487d-ba5a-25053f3a99a0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2024/03/18 22:51:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [cc4f7dc3-1c17-487d-ba5a-25053f3a99a0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 36.005276304s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-522698 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-522698 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-522698 delete -f testdata/storage-provisioner/pod.yaml: (1.224011879s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-522698 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [06885c5c-b31d-4559-8f2e-16844734be68] Pending
helpers_test.go:344: "sp-pod" [06885c5c-b31d-4559-8f2e-16844734be68] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [06885c5c-b31d-4559-8f2e-16844734be68] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005419546s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-522698 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (51.24s)

TestFunctional/parallel/SSHCmd (0.39s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

TestFunctional/parallel/CpCmd (1.33s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh -n functional-522698 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 cp functional-522698:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd72135399/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh -n functional-522698 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh -n functional-522698 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.33s)

TestFunctional/parallel/MySQL (44.75s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-522698 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-vbh7c" [557d6a7a-aab5-4763-a757-7bf3f2bb7d3d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-vbh7c" [557d6a7a-aab5-4763-a757-7bf3f2bb7d3d] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 38.007330745s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-522698 exec mysql-859648c796-vbh7c -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-522698 exec mysql-859648c796-vbh7c -- mysql -ppassword -e "show databases;": exit status 1 (218.635886ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-522698 exec mysql-859648c796-vbh7c -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-522698 exec mysql-859648c796-vbh7c -- mysql -ppassword -e "show databases;": exit status 1 (206.519287ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-522698 exec mysql-859648c796-vbh7c -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-522698 exec mysql-859648c796-vbh7c -- mysql -ppassword -e "show databases;": exit status 1 (124.962622ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-522698 exec mysql-859648c796-vbh7c -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-522698 exec mysql-859648c796-vbh7c -- mysql -ppassword -e "show databases;": exit status 1 (151.190657ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-522698 exec mysql-859648c796-vbh7c -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (44.75s)

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13738/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "sudo cat /etc/test/nested/copy/13738/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

TestFunctional/parallel/CertSync (1.48s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13738.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "sudo cat /etc/ssl/certs/13738.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13738.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "sudo cat /usr/share/ca-certificates/13738.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/137382.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "sudo cat /etc/ssl/certs/137382.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/137382.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "sudo cat /usr/share/ca-certificates/137382.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.48s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-522698 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-522698 ssh "sudo systemctl is-active docker": exit status 1 (227.352636ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-522698 ssh "sudo systemctl is-active crio": exit status 1 (269.206798ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)

TestFunctional/parallel/License (0.79s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.79s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-522698 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-522698 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-p5t7g" [f99cbb05-09b0-4978-9efb-c50f86229dce] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-p5t7g" [f99cbb05-09b0-4978-9efb-c50f86229dce] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003967395s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.29s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/MountCmd/any-port (10.59s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-522698 /tmp/TestFunctionalparallelMountCmdany-port4236546537/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710802254890741213" to /tmp/TestFunctionalparallelMountCmdany-port4236546537/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710802254890741213" to /tmp/TestFunctionalparallelMountCmdany-port4236546537/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710802254890741213" to /tmp/TestFunctionalparallelMountCmdany-port4236546537/001/test-1710802254890741213
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-522698 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.945435ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 18 22:50 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 18 22:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 18 22:50 test-1710802254890741213
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh cat /mount-9p/test-1710802254890741213
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-522698 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e2460bff-26d2-4807-9009-50c203cdcdad] Pending
helpers_test.go:344: "busybox-mount" [e2460bff-26d2-4807-9009-50c203cdcdad] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e2460bff-26d2-4807-9009-50c203cdcdad] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e2460bff-26d2-4807-9009-50c203cdcdad] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004142107s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-522698 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-522698 /tmp/TestFunctionalparallelMountCmdany-port4236546537/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.59s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "236.057597ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "53.32256ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "262.858508ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "58.152869ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/ServiceCmd/List (0.85s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.85s)

TestFunctional/parallel/MountCmd/specific-port (1.57s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-522698 /tmp/TestFunctionalparallelMountCmdspecific-port2885839753/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-522698 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (234.220496ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-522698 /tmp/TestFunctionalparallelMountCmdspecific-port2885839753/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-522698 ssh "sudo umount -f /mount-9p": exit status 1 (226.475365ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-522698 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-522698 /tmp/TestFunctionalparallelMountCmdspecific-port2885839753/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.57s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.87s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 service list -o json
functional_test.go:1490: Took "866.407542ms" to run "out/minikube-linux-amd64 -p functional-522698 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.87s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.12:30428
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.68s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-522698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2615626688/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-522698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2615626688/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-522698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2615626688/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-522698 ssh "findmnt -T" /mount1: exit status 1 (264.930461ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-522698 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-522698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2615626688/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-522698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2615626688/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-522698 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2615626688/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.12:30428
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-522698 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-522698
docker.io/library/minikube-local-cache-test:functional-522698
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-522698 image ls --format short --alsologtostderr:
I0318 22:51:29.269226   22148 out.go:291] Setting OutFile to fd 1 ...
I0318 22:51:29.269347   22148 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 22:51:29.269353   22148 out.go:304] Setting ErrFile to fd 2...
I0318 22:51:29.269360   22148 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 22:51:29.269611   22148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
I0318 22:51:29.270360   22148 config.go:182] Loaded profile config "functional-522698": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0318 22:51:29.270514   22148 config.go:182] Loaded profile config "functional-522698": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0318 22:51:29.271075   22148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0318 22:51:29.271124   22148 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 22:51:29.285385   22148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
I0318 22:51:29.285726   22148 main.go:141] libmachine: () Calling .GetVersion
I0318 22:51:29.286273   22148 main.go:141] libmachine: Using API Version  1
I0318 22:51:29.286298   22148 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 22:51:29.286653   22148 main.go:141] libmachine: () Calling .GetMachineName
I0318 22:51:29.286888   22148 main.go:141] libmachine: (functional-522698) Calling .GetState
I0318 22:51:29.289119   22148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0318 22:51:29.289182   22148 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 22:51:29.302043   22148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33719
I0318 22:51:29.302572   22148 main.go:141] libmachine: () Calling .GetVersion
I0318 22:51:29.302908   22148 main.go:141] libmachine: Using API Version  1
I0318 22:51:29.302923   22148 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 22:51:29.303266   22148 main.go:141] libmachine: () Calling .GetMachineName
I0318 22:51:29.303419   22148 main.go:141] libmachine: (functional-522698) Calling .DriverName
I0318 22:51:29.303626   22148 ssh_runner.go:195] Run: systemctl --version
I0318 22:51:29.303652   22148 main.go:141] libmachine: (functional-522698) Calling .GetSSHHostname
I0318 22:51:29.306048   22148 main.go:141] libmachine: (functional-522698) DBG | domain functional-522698 has defined MAC address 52:54:00:64:38:f0 in network mk-functional-522698
I0318 22:51:29.306427   22148 main.go:141] libmachine: (functional-522698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:38:f0", ip: ""} in network mk-functional-522698: {Iface:virbr1 ExpiryTime:2024-03-18 23:47:46 +0000 UTC Type:0 Mac:52:54:00:64:38:f0 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-522698 Clientid:01:52:54:00:64:38:f0}
I0318 22:51:29.306456   22148 main.go:141] libmachine: (functional-522698) DBG | domain functional-522698 has defined IP address 192.168.39.12 and MAC address 52:54:00:64:38:f0 in network mk-functional-522698
I0318 22:51:29.306768   22148 main.go:141] libmachine: (functional-522698) Calling .GetSSHPort
I0318 22:51:29.306892   22148 main.go:141] libmachine: (functional-522698) Calling .GetSSHKeyPath
I0318 22:51:29.307069   22148 main.go:141] libmachine: (functional-522698) Calling .GetSSHUsername
I0318 22:51:29.307189   22148 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/functional-522698/id_rsa Username:docker}
I0318 22:51:29.392531   22148 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 22:51:29.458074   22148 main.go:141] libmachine: Making call to close driver server
I0318 22:51:29.458092   22148 main.go:141] libmachine: (functional-522698) Calling .Close
I0318 22:51:29.458413   22148 main.go:141] libmachine: Successfully made call to close driver server
I0318 22:51:29.458436   22148 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 22:51:29.458446   22148 main.go:141] libmachine: Making call to close driver server
I0318 22:51:29.458458   22148 main.go:141] libmachine: (functional-522698) Calling .Close
I0318 22:51:29.458709   22148 main.go:141] libmachine: Successfully made call to close driver server
I0318 22:51:29.458722   22148 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-522698 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/kube-controller-manager     | v1.29.3            | sha256:6052a2 | 33.5MB |
| registry.k8s.io/kube-proxy                  | v1.29.3            | sha256:a1d263 | 28.4MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/google-containers/addon-resizer      | functional-522698  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4950bb | 27.8MB |
| registry.k8s.io/kube-apiserver              | v1.29.3            | sha256:39f995 | 35.1MB |
| registry.k8s.io/kube-scheduler              | v1.29.3            | sha256:8c390d | 18.6MB |
| docker.io/library/minikube-local-cache-test | functional-522698  | sha256:432c1d | 1.01kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-522698 image ls --format table --alsologtostderr:
I0318 22:51:29.765708   22261 out.go:291] Setting OutFile to fd 1 ...
I0318 22:51:29.765793   22261 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 22:51:29.765803   22261 out.go:304] Setting ErrFile to fd 2...
I0318 22:51:29.765807   22261 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 22:51:29.765988   22261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
I0318 22:51:29.766526   22261 config.go:182] Loaded profile config "functional-522698": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0318 22:51:29.766642   22261 config.go:182] Loaded profile config "functional-522698": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0318 22:51:29.767124   22261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0318 22:51:29.767166   22261 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 22:51:29.783329   22261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45111
I0318 22:51:29.783764   22261 main.go:141] libmachine: () Calling .GetVersion
I0318 22:51:29.784276   22261 main.go:141] libmachine: Using API Version  1
I0318 22:51:29.784296   22261 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 22:51:29.784625   22261 main.go:141] libmachine: () Calling .GetMachineName
I0318 22:51:29.784825   22261 main.go:141] libmachine: (functional-522698) Calling .GetState
I0318 22:51:29.786379   22261 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0318 22:51:29.786410   22261 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 22:51:29.799808   22261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
I0318 22:51:29.800120   22261 main.go:141] libmachine: () Calling .GetVersion
I0318 22:51:29.800586   22261 main.go:141] libmachine: Using API Version  1
I0318 22:51:29.800621   22261 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 22:51:29.800935   22261 main.go:141] libmachine: () Calling .GetMachineName
I0318 22:51:29.801128   22261 main.go:141] libmachine: (functional-522698) Calling .DriverName
I0318 22:51:29.801297   22261 ssh_runner.go:195] Run: systemctl --version
I0318 22:51:29.801315   22261 main.go:141] libmachine: (functional-522698) Calling .GetSSHHostname
I0318 22:51:29.803725   22261 main.go:141] libmachine: (functional-522698) DBG | domain functional-522698 has defined MAC address 52:54:00:64:38:f0 in network mk-functional-522698
I0318 22:51:29.804138   22261 main.go:141] libmachine: (functional-522698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:38:f0", ip: ""} in network mk-functional-522698: {Iface:virbr1 ExpiryTime:2024-03-18 23:47:46 +0000 UTC Type:0 Mac:52:54:00:64:38:f0 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-522698 Clientid:01:52:54:00:64:38:f0}
I0318 22:51:29.804169   22261 main.go:141] libmachine: (functional-522698) DBG | domain functional-522698 has defined IP address 192.168.39.12 and MAC address 52:54:00:64:38:f0 in network mk-functional-522698
I0318 22:51:29.804300   22261 main.go:141] libmachine: (functional-522698) Calling .GetSSHPort
I0318 22:51:29.804476   22261 main.go:141] libmachine: (functional-522698) Calling .GetSSHKeyPath
I0318 22:51:29.804653   22261 main.go:141] libmachine: (functional-522698) Calling .GetSSHUsername
I0318 22:51:29.804802   22261 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/functional-522698/id_rsa Username:docker}
I0318 22:51:29.894320   22261 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 22:51:29.945784   22261 main.go:141] libmachine: Making call to close driver server
I0318 22:51:29.945798   22261 main.go:141] libmachine: (functional-522698) Calling .Close
I0318 22:51:29.946116   22261 main.go:141] libmachine: (functional-522698) DBG | Closing plugin on server side
I0318 22:51:29.946120   22261 main.go:141] libmachine: Successfully made call to close driver server
I0318 22:51:29.946151   22261 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 22:51:29.946160   22261 main.go:141] libmachine: Making call to close driver server
I0318 22:51:29.946168   22261 main.go:141] libmachine: (functional-522698) Calling .Close
I0318 22:51:29.946421   22261 main.go:141] libmachine: (functional-522698) DBG | Closing plugin on server side
I0318 22:51:29.946429   22261 main.go:141] libmachine: Successfully made call to close driver server
I0318 22:51:29.946460   22261 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-522698 image ls --format json --alsologtostderr:
[{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"28398741"},{"id":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"18553260"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-522698"],"size":"10823156"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"35100536"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"27755257"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:432c1d6ef432425f7464346f3cbe2771d93e08120df6a40852b09cf22006615c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-522698"],"size":"1007"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"33466661"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-522698 image ls --format json --alsologtostderr:
I0318 22:51:29.513173   22195 out.go:291] Setting OutFile to fd 1 ...
I0318 22:51:29.513319   22195 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 22:51:29.513329   22195 out.go:304] Setting ErrFile to fd 2...
I0318 22:51:29.513333   22195 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 22:51:29.513523   22195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
I0318 22:51:29.514124   22195 config.go:182] Loaded profile config "functional-522698": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0318 22:51:29.514233   22195 config.go:182] Loaded profile config "functional-522698": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0318 22:51:29.514639   22195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0318 22:51:29.514682   22195 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 22:51:29.529997   22195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43661
I0318 22:51:29.530370   22195 main.go:141] libmachine: () Calling .GetVersion
I0318 22:51:29.530863   22195 main.go:141] libmachine: Using API Version  1
I0318 22:51:29.530882   22195 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 22:51:29.531206   22195 main.go:141] libmachine: () Calling .GetMachineName
I0318 22:51:29.531383   22195 main.go:141] libmachine: (functional-522698) Calling .GetState
I0318 22:51:29.533361   22195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0318 22:51:29.533404   22195 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 22:51:29.550289   22195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
I0318 22:51:29.550637   22195 main.go:141] libmachine: () Calling .GetVersion
I0318 22:51:29.551140   22195 main.go:141] libmachine: Using API Version  1
I0318 22:51:29.551157   22195 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 22:51:29.551543   22195 main.go:141] libmachine: () Calling .GetMachineName
I0318 22:51:29.551772   22195 main.go:141] libmachine: (functional-522698) Calling .DriverName
I0318 22:51:29.551936   22195 ssh_runner.go:195] Run: systemctl --version
I0318 22:51:29.551961   22195 main.go:141] libmachine: (functional-522698) Calling .GetSSHHostname
I0318 22:51:29.554829   22195 main.go:141] libmachine: (functional-522698) DBG | domain functional-522698 has defined MAC address 52:54:00:64:38:f0 in network mk-functional-522698
I0318 22:51:29.555147   22195 main.go:141] libmachine: (functional-522698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:38:f0", ip: ""} in network mk-functional-522698: {Iface:virbr1 ExpiryTime:2024-03-18 23:47:46 +0000 UTC Type:0 Mac:52:54:00:64:38:f0 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-522698 Clientid:01:52:54:00:64:38:f0}
I0318 22:51:29.555176   22195 main.go:141] libmachine: (functional-522698) DBG | domain functional-522698 has defined IP address 192.168.39.12 and MAC address 52:54:00:64:38:f0 in network mk-functional-522698
I0318 22:51:29.555382   22195 main.go:141] libmachine: (functional-522698) Calling .GetSSHPort
I0318 22:51:29.555542   22195 main.go:141] libmachine: (functional-522698) Calling .GetSSHKeyPath
I0318 22:51:29.555691   22195 main.go:141] libmachine: (functional-522698) Calling .GetSSHUsername
I0318 22:51:29.555825   22195 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/functional-522698/id_rsa Username:docker}
I0318 22:51:29.649830   22195 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 22:51:29.704237   22195 main.go:141] libmachine: Making call to close driver server
I0318 22:51:29.704248   22195 main.go:141] libmachine: (functional-522698) Calling .Close
I0318 22:51:29.704564   22195 main.go:141] libmachine: Successfully made call to close driver server
I0318 22:51:29.704597   22195 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 22:51:29.704605   22195 main.go:141] libmachine: Making call to close driver server
I0318 22:51:29.704612   22195 main.go:141] libmachine: (functional-522698) Calling .Close
I0318 22:51:29.705952   22195 main.go:141] libmachine: (functional-522698) DBG | Closing plugin on server side
I0318 22:51:29.705974   22195 main.go:141] libmachine: Successfully made call to close driver server
I0318 22:51:29.706011   22195 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-522698 image ls --format yaml --alsologtostderr:
- id: sha256:432c1d6ef432425f7464346f3cbe2771d93e08120df6a40852b09cf22006615c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-522698
size: "1007"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "35100536"
- id: sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "33466661"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-522698
size: "10823156"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "27755257"
- id: sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "28398741"
- id: sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "18553260"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-522698 image ls --format yaml --alsologtostderr:
I0318 22:51:29.269226   22149 out.go:291] Setting OutFile to fd 1 ...
I0318 22:51:29.269360   22149 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 22:51:29.269368   22149 out.go:304] Setting ErrFile to fd 2...
I0318 22:51:29.269375   22149 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 22:51:29.269653   22149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
I0318 22:51:29.270360   22149 config.go:182] Loaded profile config "functional-522698": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0318 22:51:29.270514   22149 config.go:182] Loaded profile config "functional-522698": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0318 22:51:29.271071   22149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0318 22:51:29.271122   22149 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 22:51:29.285411   22149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
I0318 22:51:29.285826   22149 main.go:141] libmachine: () Calling .GetVersion
I0318 22:51:29.286443   22149 main.go:141] libmachine: Using API Version  1
I0318 22:51:29.286472   22149 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 22:51:29.286798   22149 main.go:141] libmachine: () Calling .GetMachineName
I0318 22:51:29.286984   22149 main.go:141] libmachine: (functional-522698) Calling .GetState
I0318 22:51:29.288685   22149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0318 22:51:29.288720   22149 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 22:51:29.301668   22149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39381
I0318 22:51:29.301990   22149 main.go:141] libmachine: () Calling .GetVersion
I0318 22:51:29.302426   22149 main.go:141] libmachine: Using API Version  1
I0318 22:51:29.302449   22149 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 22:51:29.302794   22149 main.go:141] libmachine: () Calling .GetMachineName
I0318 22:51:29.302942   22149 main.go:141] libmachine: (functional-522698) Calling .DriverName
I0318 22:51:29.303104   22149 ssh_runner.go:195] Run: systemctl --version
I0318 22:51:29.303136   22149 main.go:141] libmachine: (functional-522698) Calling .GetSSHHostname
I0318 22:51:29.305940   22149 main.go:141] libmachine: (functional-522698) DBG | domain functional-522698 has defined MAC address 52:54:00:64:38:f0 in network mk-functional-522698
I0318 22:51:29.306290   22149 main.go:141] libmachine: (functional-522698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:38:f0", ip: ""} in network mk-functional-522698: {Iface:virbr1 ExpiryTime:2024-03-18 23:47:46 +0000 UTC Type:0 Mac:52:54:00:64:38:f0 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-522698 Clientid:01:52:54:00:64:38:f0}
I0318 22:51:29.306365   22149 main.go:141] libmachine: (functional-522698) DBG | domain functional-522698 has defined IP address 192.168.39.12 and MAC address 52:54:00:64:38:f0 in network mk-functional-522698
I0318 22:51:29.306683   22149 main.go:141] libmachine: (functional-522698) Calling .GetSSHPort
I0318 22:51:29.306833   22149 main.go:141] libmachine: (functional-522698) Calling .GetSSHKeyPath
I0318 22:51:29.306964   22149 main.go:141] libmachine: (functional-522698) Calling .GetSSHUsername
I0318 22:51:29.307093   22149 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/functional-522698/id_rsa Username:docker}
I0318 22:51:29.396541   22149 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 22:51:29.447114   22149 main.go:141] libmachine: Making call to close driver server
I0318 22:51:29.447130   22149 main.go:141] libmachine: (functional-522698) Calling .Close
I0318 22:51:29.447404   22149 main.go:141] libmachine: (functional-522698) DBG | Closing plugin on server side
I0318 22:51:29.447461   22149 main.go:141] libmachine: Successfully made call to close driver server
I0318 22:51:29.447490   22149 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 22:51:29.447505   22149 main.go:141] libmachine: Making call to close driver server
I0318 22:51:29.447517   22149 main.go:141] libmachine: (functional-522698) Calling .Close
I0318 22:51:29.447731   22149 main.go:141] libmachine: Successfully made call to close driver server
I0318 22:51:29.447752   22149 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
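The YAML listing above is produced from `sudo crictl images --output json` run on the node (see the `ssh_runner` line in the log), then reshaped into per-image entries. A minimal sketch of that transformation, assuming crictl's JSON shape with a top-level `images` array (`summarize_images` and the abbreviated sample data are illustrative, not minikube's actual code):

```python
import json

def summarize_images(crictl_json: str):
    # Reduce each crictl image record to the fields shown in the report:
    # id, repoTags, and size (crictl reports size as a string of bytes).
    images = json.loads(crictl_json).get("images", [])
    return [
        {"id": img.get("id", ""),
         "repoTags": img.get("repoTags") or [],
         "size": img.get("size", "")}
        for img in images
    ]

# Abbreviated sample mirroring one entry from the listing above.
sample = json.dumps({
    "images": [
        {"id": "sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06",
         "repoDigests": [],
         "repoTags": ["registry.k8s.io/pause:latest"],
         "size": "72306"}
    ]
})
print(summarize_images(sample))
```

The real test compares this structure (rendered as YAML) against the expected image set for the profile.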

TestFunctional/parallel/ImageCommands/ImageBuild (5.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-522698 ssh pgrep buildkitd: exit status 1 (205.154588ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image build -t localhost/my-image:functional-522698 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-522698 image build -t localhost/my-image:functional-522698 testdata/build --alsologtostderr: (5.354063303s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-522698 image build -t localhost/my-image:functional-522698 testdata/build --alsologtostderr:
I0318 22:51:29.721928   22249 out.go:291] Setting OutFile to fd 1 ...
I0318 22:51:29.722372   22249 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 22:51:29.722407   22249 out.go:304] Setting ErrFile to fd 2...
I0318 22:51:29.722424   22249 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 22:51:29.722876   22249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
I0318 22:51:29.723881   22249 config.go:182] Loaded profile config "functional-522698": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0318 22:51:29.724635   22249 config.go:182] Loaded profile config "functional-522698": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0318 22:51:29.725197   22249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0318 22:51:29.725251   22249 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 22:51:29.740312   22249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
I0318 22:51:29.740731   22249 main.go:141] libmachine: () Calling .GetVersion
I0318 22:51:29.741300   22249 main.go:141] libmachine: Using API Version  1
I0318 22:51:29.741323   22249 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 22:51:29.741621   22249 main.go:141] libmachine: () Calling .GetMachineName
I0318 22:51:29.741794   22249 main.go:141] libmachine: (functional-522698) Calling .GetState
I0318 22:51:29.744554   22249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0318 22:51:29.744599   22249 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 22:51:29.762225   22249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45121
I0318 22:51:29.762542   22249 main.go:141] libmachine: () Calling .GetVersion
I0318 22:51:29.763045   22249 main.go:141] libmachine: Using API Version  1
I0318 22:51:29.763064   22249 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 22:51:29.763369   22249 main.go:141] libmachine: () Calling .GetMachineName
I0318 22:51:29.763575   22249 main.go:141] libmachine: (functional-522698) Calling .DriverName
I0318 22:51:29.763788   22249 ssh_runner.go:195] Run: systemctl --version
I0318 22:51:29.763817   22249 main.go:141] libmachine: (functional-522698) Calling .GetSSHHostname
I0318 22:51:29.766319   22249 main.go:141] libmachine: (functional-522698) DBG | domain functional-522698 has defined MAC address 52:54:00:64:38:f0 in network mk-functional-522698
I0318 22:51:29.766740   22249 main.go:141] libmachine: (functional-522698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:38:f0", ip: ""} in network mk-functional-522698: {Iface:virbr1 ExpiryTime:2024-03-18 23:47:46 +0000 UTC Type:0 Mac:52:54:00:64:38:f0 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-522698 Clientid:01:52:54:00:64:38:f0}
I0318 22:51:29.766771   22249 main.go:141] libmachine: (functional-522698) DBG | domain functional-522698 has defined IP address 192.168.39.12 and MAC address 52:54:00:64:38:f0 in network mk-functional-522698
I0318 22:51:29.766898   22249 main.go:141] libmachine: (functional-522698) Calling .GetSSHPort
I0318 22:51:29.767051   22249 main.go:141] libmachine: (functional-522698) Calling .GetSSHKeyPath
I0318 22:51:29.767209   22249 main.go:141] libmachine: (functional-522698) Calling .GetSSHUsername
I0318 22:51:29.767332   22249 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/functional-522698/id_rsa Username:docker}
I0318 22:51:29.855764   22249 build_images.go:161] Building image from path: /tmp/build.1861854830.tar
I0318 22:51:29.855815   22249 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0318 22:51:29.876007   22249 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1861854830.tar
I0318 22:51:29.881945   22249 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1861854830.tar: stat -c "%s %y" /var/lib/minikube/build/build.1861854830.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1861854830.tar': No such file or directory
I0318 22:51:29.881969   22249 ssh_runner.go:362] scp /tmp/build.1861854830.tar --> /var/lib/minikube/build/build.1861854830.tar (3072 bytes)
I0318 22:51:29.926165   22249 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1861854830
I0318 22:51:29.949572   22249 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1861854830 -xf /var/lib/minikube/build/build.1861854830.tar
I0318 22:51:29.961874   22249 containerd.go:394] Building image: /var/lib/minikube/build/build.1861854830
I0318 22:51:29.961926   22249 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1861854830 --local dockerfile=/var/lib/minikube/build/build.1861854830 --output type=image,name=localhost/my-image:functional-522698
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.6s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:dc443694b8be6112f41d5fb3fc5fcf82fcf8984bf4c4dd42ab80612f7db42cbe
#8 exporting manifest sha256:dc443694b8be6112f41d5fb3fc5fcf82fcf8984bf4c4dd42ab80612f7db42cbe 0.0s done
#8 exporting config sha256:0f81ec9b99c9cebf6ba5f24135706075040a76ef660c7be1d3a0a7c372633e2d 0.0s done
#8 naming to localhost/my-image:functional-522698 0.0s done
#8 DONE 0.3s
I0318 22:51:34.987788   22249 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1861854830 --local dockerfile=/var/lib/minikube/build/build.1861854830 --output type=image,name=localhost/my-image:functional-522698: (5.025821615s)
I0318 22:51:34.987898   22249 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1861854830
I0318 22:51:35.004928   22249 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1861854830.tar
I0318 22:51:35.017185   22249 build_images.go:217] Built localhost/my-image:functional-522698 from /tmp/build.1861854830.tar
I0318 22:51:35.017218   22249 build_images.go:133] succeeded building to: functional-522698
I0318 22:51:35.017225   22249 build_images.go:134] failed building to: 
I0318 22:51:35.017252   22249 main.go:141] libmachine: Making call to close driver server
I0318 22:51:35.017278   22249 main.go:141] libmachine: (functional-522698) Calling .Close
I0318 22:51:35.017539   22249 main.go:141] libmachine: Successfully made call to close driver server
I0318 22:51:35.017554   22249 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 22:51:35.017562   22249 main.go:141] libmachine: Making call to close driver server
I0318 22:51:35.017570   22249 main.go:141] libmachine: (functional-522698) Calling .Close
I0318 22:51:35.017788   22249 main.go:141] libmachine: (functional-522698) DBG | Closing plugin on server side
I0318 22:51:35.017824   22249 main.go:141] libmachine: Successfully made call to close driver server
I0318 22:51:35.017832   22249 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.81s)
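As the build log above records, on the containerd runtime `minikube image build` copies the build context tar onto the node, unpacks it under `/var/lib/minikube/build`, and shells out to `buildctl` with the dockerfile frontend. A sketch reconstructing that invocation from the logged command (`buildctl_args` is a hypothetical helper, not minikube's actual function):

```python
def buildctl_args(build_dir: str, image: str):
    # Assemble the buildctl command line as seen in the log: the unpacked
    # build directory serves as both the context and the dockerfile source,
    # and the result is exported as a named image.
    return [
        "sudo", "buildctl", "build",
        "--frontend", "dockerfile.v0",
        "--local", f"context={build_dir}",
        "--local", f"dockerfile={build_dir}",
        "--output", f"type=image,name={image}",
    ]

cmd = " ".join(buildctl_args(
    "/var/lib/minikube/build/build.1861854830",
    "localhost/my-image:functional-522698"))
print(cmd)
```

Joined, this reproduces the command the test ran over SSH before cleaning up the build directory and tar.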

TestFunctional/parallel/ImageCommands/Setup (2.62s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.602858571s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-522698
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.62s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image load --daemon gcr.io/google-containers/addon-resizer:functional-522698 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-522698 image load --daemon gcr.io/google-containers/addon-resizer:functional-522698 --alsologtostderr: (4.743676639s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.01s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image load --daemon gcr.io/google-containers/addon-resizer:functional-522698 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-522698 image load --daemon gcr.io/google-containers/addon-resizer:functional-522698 --alsologtostderr: (2.743050906s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image ls
E0318 22:51:17.849168   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.97s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.725412972s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-522698
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image load --daemon gcr.io/google-containers/addon-resizer:functional-522698 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-522698 image load --daemon gcr.io/google-containers/addon-resizer:functional-522698 --alsologtostderr: (3.937641914s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.93s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image save gcr.io/google-containers/addon-resizer:functional-522698 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-522698 image save gcr.io/google-containers/addon-resizer:functional-522698 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.070771669s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image rm gcr.io/google-containers/addon-resizer:functional-522698 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-522698 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.317931316s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.54s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-522698
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-522698 image save --daemon gcr.io/google-containers/addon-resizer:functional-522698 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-522698 image save --daemon gcr.io/google-containers/addon-resizer:functional-522698 --alsologtostderr: (1.101032204s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-522698
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.13s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-522698
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-522698
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-522698
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (284.00s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-418460 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0318 22:53:34.004458   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 22:54:01.692646   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 22:55:54.742676   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 22:55:54.747961   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 22:55:54.758200   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 22:55:54.778477   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 22:55:54.818726   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 22:55:54.899043   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 22:55:55.059452   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 22:55:55.380072   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 22:55:56.020930   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 22:55:57.301196   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 22:55:59.862753   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 22:56:04.982935   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 22:56:15.224060   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 22:56:35.704579   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-418460 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (4m43.315621933s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (284.00s)

TestMultiControlPlane/serial/DeployApp (6.95s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-418460 -- rollout status deployment/busybox: (4.589595701s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- exec busybox-7fdf7869d9-68xwx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- exec busybox-7fdf7869d9-dppcr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- exec busybox-7fdf7869d9-r87pk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- exec busybox-7fdf7869d9-68xwx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- exec busybox-7fdf7869d9-dppcr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- exec busybox-7fdf7869d9-r87pk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- exec busybox-7fdf7869d9-68xwx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- exec busybox-7fdf7869d9-dppcr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- exec busybox-7fdf7869d9-r87pk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.95s)

TestMultiControlPlane/serial/PingHostFromPods (1.31s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- exec busybox-7fdf7869d9-68xwx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- exec busybox-7fdf7869d9-68xwx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- exec busybox-7fdf7869d9-dppcr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- exec busybox-7fdf7869d9-dppcr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- exec busybox-7fdf7869d9-r87pk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-418460 -- exec busybox-7fdf7869d9-r87pk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.31s)
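The PingHostFromPods steps above recover the host IP inside each busybox pod by slicing `nslookup host.minikube.internal` output with `awk` and `cut` (ha_test.go:207), then pinging the extracted address (ha_test.go:218). A minimal local sketch of that extraction pipeline, run against a hypothetical sample of busybox-style `nslookup` output rather than inside a pod:

```shell
# Hypothetical busybox-style `nslookup host.minikube.internal` output;
# the test's pipeline assumes the host's address appears on line 5.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1'

# Same pipeline as the test: keep line 5, take the 3rd space-separated field.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The test then runs `ping -c 1 $host_ip` from the pod to confirm host reachability; the line-5 assumption is specific to busybox's `nslookup` output layout.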

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (48.13s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-418460 -v=7 --alsologtostderr
E0318 22:57:16.666253   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-418460 -v=7 --alsologtostderr: (47.291618746s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.13s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-418460 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

TestMultiControlPlane/serial/CopyFile (13.15s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp testdata/cp-test.txt ha-418460:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3094021677/001/cp-test_ha-418460.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460:/home/docker/cp-test.txt ha-418460-m02:/home/docker/cp-test_ha-418460_ha-418460-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m02 "sudo cat /home/docker/cp-test_ha-418460_ha-418460-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460:/home/docker/cp-test.txt ha-418460-m03:/home/docker/cp-test_ha-418460_ha-418460-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m03 "sudo cat /home/docker/cp-test_ha-418460_ha-418460-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460:/home/docker/cp-test.txt ha-418460-m04:/home/docker/cp-test_ha-418460_ha-418460-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m04 "sudo cat /home/docker/cp-test_ha-418460_ha-418460-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp testdata/cp-test.txt ha-418460-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3094021677/001/cp-test_ha-418460-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460-m02:/home/docker/cp-test.txt ha-418460:/home/docker/cp-test_ha-418460-m02_ha-418460.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460 "sudo cat /home/docker/cp-test_ha-418460-m02_ha-418460.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460-m02:/home/docker/cp-test.txt ha-418460-m03:/home/docker/cp-test_ha-418460-m02_ha-418460-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m03 "sudo cat /home/docker/cp-test_ha-418460-m02_ha-418460-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460-m02:/home/docker/cp-test.txt ha-418460-m04:/home/docker/cp-test_ha-418460-m02_ha-418460-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m04 "sudo cat /home/docker/cp-test_ha-418460-m02_ha-418460-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp testdata/cp-test.txt ha-418460-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3094021677/001/cp-test_ha-418460-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460-m03:/home/docker/cp-test.txt ha-418460:/home/docker/cp-test_ha-418460-m03_ha-418460.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460 "sudo cat /home/docker/cp-test_ha-418460-m03_ha-418460.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460-m03:/home/docker/cp-test.txt ha-418460-m02:/home/docker/cp-test_ha-418460-m03_ha-418460-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m02 "sudo cat /home/docker/cp-test_ha-418460-m03_ha-418460-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460-m03:/home/docker/cp-test.txt ha-418460-m04:/home/docker/cp-test_ha-418460-m03_ha-418460-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m04 "sudo cat /home/docker/cp-test_ha-418460-m03_ha-418460-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp testdata/cp-test.txt ha-418460-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3094021677/001/cp-test_ha-418460-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460-m04:/home/docker/cp-test.txt ha-418460:/home/docker/cp-test_ha-418460-m04_ha-418460.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460 "sudo cat /home/docker/cp-test_ha-418460-m04_ha-418460.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460-m04:/home/docker/cp-test.txt ha-418460-m02:/home/docker/cp-test_ha-418460-m04_ha-418460-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m02 "sudo cat /home/docker/cp-test_ha-418460-m04_ha-418460-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 cp ha-418460-m04:/home/docker/cp-test.txt ha-418460-m03:/home/docker/cp-test_ha-418460-m04_ha-418460-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 ssh -n ha-418460-m03 "sudo cat /home/docker/cp-test_ha-418460-m04_ha-418460-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.15s)

TestMultiControlPlane/serial/StopSecondaryNode (92.39s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 node stop m02 -v=7 --alsologtostderr
E0318 22:58:34.004525   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 22:58:38.586851   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-418460 node stop m02 -v=7 --alsologtostderr: (1m31.731346348s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-418460 status -v=7 --alsologtostderr: exit status 7 (659.750046ms)

-- stdout --
	ha-418460
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-418460-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-418460-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-418460-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0318 22:59:30.140283   26822 out.go:291] Setting OutFile to fd 1 ...
	I0318 22:59:30.140563   26822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:59:30.140574   26822 out.go:304] Setting ErrFile to fd 2...
	I0318 22:59:30.140578   26822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:59:30.140758   26822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
	I0318 22:59:30.140909   26822 out.go:298] Setting JSON to false
	I0318 22:59:30.140933   26822 mustload.go:65] Loading cluster: ha-418460
	I0318 22:59:30.140993   26822 notify.go:220] Checking for updates...
	I0318 22:59:30.141312   26822 config.go:182] Loaded profile config "ha-418460": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0318 22:59:30.141329   26822 status.go:255] checking status of ha-418460 ...
	I0318 22:59:30.141797   26822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:59:30.141864   26822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:59:30.161036   26822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41387
	I0318 22:59:30.161461   26822 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:59:30.161957   26822 main.go:141] libmachine: Using API Version  1
	I0318 22:59:30.161976   26822 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:59:30.162376   26822 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:59:30.162563   26822 main.go:141] libmachine: (ha-418460) Calling .GetState
	I0318 22:59:30.164184   26822 status.go:330] ha-418460 host status = "Running" (err=<nil>)
	I0318 22:59:30.164198   26822 host.go:66] Checking if "ha-418460" exists ...
	I0318 22:59:30.164547   26822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:59:30.164583   26822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:59:30.179287   26822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41161
	I0318 22:59:30.179639   26822 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:59:30.180058   26822 main.go:141] libmachine: Using API Version  1
	I0318 22:59:30.180075   26822 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:59:30.180366   26822 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:59:30.180562   26822 main.go:141] libmachine: (ha-418460) Calling .GetIP
	I0318 22:59:30.183025   26822 main.go:141] libmachine: (ha-418460) DBG | domain ha-418460 has defined MAC address 52:54:00:f8:9e:e0 in network mk-ha-418460
	I0318 22:59:30.183391   26822 main.go:141] libmachine: (ha-418460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:9e:e0", ip: ""} in network mk-ha-418460: {Iface:virbr1 ExpiryTime:2024-03-18 23:52:19 +0000 UTC Type:0 Mac:52:54:00:f8:9e:e0 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ha-418460 Clientid:01:52:54:00:f8:9e:e0}
	I0318 22:59:30.183423   26822 main.go:141] libmachine: (ha-418460) DBG | domain ha-418460 has defined IP address 192.168.39.242 and MAC address 52:54:00:f8:9e:e0 in network mk-ha-418460
	I0318 22:59:30.183519   26822 host.go:66] Checking if "ha-418460" exists ...
	I0318 22:59:30.183776   26822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:59:30.183815   26822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:59:30.196991   26822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I0318 22:59:30.197368   26822 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:59:30.197841   26822 main.go:141] libmachine: Using API Version  1
	I0318 22:59:30.197866   26822 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:59:30.198111   26822 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:59:30.198271   26822 main.go:141] libmachine: (ha-418460) Calling .DriverName
	I0318 22:59:30.198429   26822 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 22:59:30.198455   26822 main.go:141] libmachine: (ha-418460) Calling .GetSSHHostname
	I0318 22:59:30.200871   26822 main.go:141] libmachine: (ha-418460) DBG | domain ha-418460 has defined MAC address 52:54:00:f8:9e:e0 in network mk-ha-418460
	I0318 22:59:30.201298   26822 main.go:141] libmachine: (ha-418460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:9e:e0", ip: ""} in network mk-ha-418460: {Iface:virbr1 ExpiryTime:2024-03-18 23:52:19 +0000 UTC Type:0 Mac:52:54:00:f8:9e:e0 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ha-418460 Clientid:01:52:54:00:f8:9e:e0}
	I0318 22:59:30.201334   26822 main.go:141] libmachine: (ha-418460) DBG | domain ha-418460 has defined IP address 192.168.39.242 and MAC address 52:54:00:f8:9e:e0 in network mk-ha-418460
	I0318 22:59:30.201477   26822 main.go:141] libmachine: (ha-418460) Calling .GetSSHPort
	I0318 22:59:30.201628   26822 main.go:141] libmachine: (ha-418460) Calling .GetSSHKeyPath
	I0318 22:59:30.201784   26822 main.go:141] libmachine: (ha-418460) Calling .GetSSHUsername
	I0318 22:59:30.201929   26822 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/ha-418460/id_rsa Username:docker}
	I0318 22:59:30.297458   26822 ssh_runner.go:195] Run: systemctl --version
	I0318 22:59:30.308431   26822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:59:30.329275   26822 kubeconfig.go:125] found "ha-418460" server: "https://192.168.39.254:8443"
	I0318 22:59:30.329297   26822 api_server.go:166] Checking apiserver status ...
	I0318 22:59:30.329329   26822 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:59:30.344916   26822 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1140/cgroup
	W0318 22:59:30.355569   26822 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1140/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 22:59:30.355615   26822 ssh_runner.go:195] Run: ls
	I0318 22:59:30.362102   26822 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 22:59:30.366971   26822 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 22:59:30.366991   26822 status.go:422] ha-418460 apiserver status = Running (err=<nil>)
	I0318 22:59:30.367001   26822 status.go:257] ha-418460 status: &{Name:ha-418460 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 22:59:30.367020   26822 status.go:255] checking status of ha-418460-m02 ...
	I0318 22:59:30.367277   26822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:59:30.367314   26822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:59:30.381603   26822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39931
	I0318 22:59:30.381938   26822 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:59:30.382366   26822 main.go:141] libmachine: Using API Version  1
	I0318 22:59:30.382382   26822 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:59:30.382704   26822 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:59:30.382890   26822 main.go:141] libmachine: (ha-418460-m02) Calling .GetState
	I0318 22:59:30.384363   26822 status.go:330] ha-418460-m02 host status = "Stopped" (err=<nil>)
	I0318 22:59:30.384377   26822 status.go:343] host is not running, skipping remaining checks
	I0318 22:59:30.384399   26822 status.go:257] ha-418460-m02 status: &{Name:ha-418460-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 22:59:30.384418   26822 status.go:255] checking status of ha-418460-m03 ...
	I0318 22:59:30.384684   26822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:59:30.384714   26822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:59:30.399033   26822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0318 22:59:30.399354   26822 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:59:30.399756   26822 main.go:141] libmachine: Using API Version  1
	I0318 22:59:30.399776   26822 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:59:30.400097   26822 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:59:30.400247   26822 main.go:141] libmachine: (ha-418460-m03) Calling .GetState
	I0318 22:59:30.401616   26822 status.go:330] ha-418460-m03 host status = "Running" (err=<nil>)
	I0318 22:59:30.401632   26822 host.go:66] Checking if "ha-418460-m03" exists ...
	I0318 22:59:30.401991   26822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:59:30.402028   26822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:59:30.415829   26822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43221
	I0318 22:59:30.416190   26822 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:59:30.416632   26822 main.go:141] libmachine: Using API Version  1
	I0318 22:59:30.416651   26822 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:59:30.416929   26822 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:59:30.417100   26822 main.go:141] libmachine: (ha-418460-m03) Calling .GetIP
	I0318 22:59:30.419421   26822 main.go:141] libmachine: (ha-418460-m03) DBG | domain ha-418460-m03 has defined MAC address 52:54:00:0d:37:c5 in network mk-ha-418460
	I0318 22:59:30.419789   26822 main.go:141] libmachine: (ha-418460-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:37:c5", ip: ""} in network mk-ha-418460: {Iface:virbr1 ExpiryTime:2024-03-18 23:55:54 +0000 UTC Type:0 Mac:52:54:00:0d:37:c5 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-418460-m03 Clientid:01:52:54:00:0d:37:c5}
	I0318 22:59:30.419814   26822 main.go:141] libmachine: (ha-418460-m03) DBG | domain ha-418460-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:0d:37:c5 in network mk-ha-418460
	I0318 22:59:30.419930   26822 host.go:66] Checking if "ha-418460-m03" exists ...
	I0318 22:59:30.420198   26822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:59:30.420229   26822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:59:30.433480   26822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45557
	I0318 22:59:30.433778   26822 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:59:30.434245   26822 main.go:141] libmachine: Using API Version  1
	I0318 22:59:30.434266   26822 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:59:30.434562   26822 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:59:30.434732   26822 main.go:141] libmachine: (ha-418460-m03) Calling .DriverName
	I0318 22:59:30.434905   26822 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 22:59:30.434926   26822 main.go:141] libmachine: (ha-418460-m03) Calling .GetSSHHostname
	I0318 22:59:30.437212   26822 main.go:141] libmachine: (ha-418460-m03) DBG | domain ha-418460-m03 has defined MAC address 52:54:00:0d:37:c5 in network mk-ha-418460
	I0318 22:59:30.437575   26822 main.go:141] libmachine: (ha-418460-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:37:c5", ip: ""} in network mk-ha-418460: {Iface:virbr1 ExpiryTime:2024-03-18 23:55:54 +0000 UTC Type:0 Mac:52:54:00:0d:37:c5 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-418460-m03 Clientid:01:52:54:00:0d:37:c5}
	I0318 22:59:30.437601   26822 main.go:141] libmachine: (ha-418460-m03) DBG | domain ha-418460-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:0d:37:c5 in network mk-ha-418460
	I0318 22:59:30.437707   26822 main.go:141] libmachine: (ha-418460-m03) Calling .GetSSHPort
	I0318 22:59:30.437870   26822 main.go:141] libmachine: (ha-418460-m03) Calling .GetSSHKeyPath
	I0318 22:59:30.438019   26822 main.go:141] libmachine: (ha-418460-m03) Calling .GetSSHUsername
	I0318 22:59:30.438149   26822 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/ha-418460-m03/id_rsa Username:docker}
	I0318 22:59:30.521964   26822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:59:30.543422   26822 kubeconfig.go:125] found "ha-418460" server: "https://192.168.39.254:8443"
	I0318 22:59:30.543448   26822 api_server.go:166] Checking apiserver status ...
	I0318 22:59:30.543485   26822 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:59:30.558468   26822 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1271/cgroup
	W0318 22:59:30.570456   26822 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1271/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 22:59:30.570503   26822 ssh_runner.go:195] Run: ls
	I0318 22:59:30.575136   26822 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 22:59:30.579285   26822 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 22:59:30.579303   26822 status.go:422] ha-418460-m03 apiserver status = Running (err=<nil>)
	I0318 22:59:30.579311   26822 status.go:257] ha-418460-m03 status: &{Name:ha-418460-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 22:59:30.579324   26822 status.go:255] checking status of ha-418460-m04 ...
	I0318 22:59:30.579576   26822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:59:30.579617   26822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:59:30.593802   26822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40875
	I0318 22:59:30.594181   26822 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:59:30.594668   26822 main.go:141] libmachine: Using API Version  1
	I0318 22:59:30.594687   26822 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:59:30.595013   26822 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:59:30.595185   26822 main.go:141] libmachine: (ha-418460-m04) Calling .GetState
	I0318 22:59:30.596760   26822 status.go:330] ha-418460-m04 host status = "Running" (err=<nil>)
	I0318 22:59:30.596772   26822 host.go:66] Checking if "ha-418460-m04" exists ...
	I0318 22:59:30.597013   26822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:59:30.597046   26822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:59:30.611981   26822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33079
	I0318 22:59:30.612362   26822 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:59:30.612804   26822 main.go:141] libmachine: Using API Version  1
	I0318 22:59:30.612825   26822 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:59:30.613111   26822 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:59:30.613281   26822 main.go:141] libmachine: (ha-418460-m04) Calling .GetIP
	I0318 22:59:30.615789   26822 main.go:141] libmachine: (ha-418460-m04) DBG | domain ha-418460-m04 has defined MAC address 52:54:00:59:4f:26 in network mk-ha-418460
	I0318 22:59:30.616262   26822 main.go:141] libmachine: (ha-418460-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4f:26", ip: ""} in network mk-ha-418460: {Iface:virbr1 ExpiryTime:2024-03-18 23:57:13 +0000 UTC Type:0 Mac:52:54:00:59:4f:26 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-418460-m04 Clientid:01:52:54:00:59:4f:26}
	I0318 22:59:30.616291   26822 main.go:141] libmachine: (ha-418460-m04) DBG | domain ha-418460-m04 has defined IP address 192.168.39.52 and MAC address 52:54:00:59:4f:26 in network mk-ha-418460
	I0318 22:59:30.616427   26822 host.go:66] Checking if "ha-418460-m04" exists ...
	I0318 22:59:30.616697   26822 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 22:59:30.616760   26822 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:59:30.633152   26822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34877
	I0318 22:59:30.633472   26822 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:59:30.633896   26822 main.go:141] libmachine: Using API Version  1
	I0318 22:59:30.633918   26822 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:59:30.634262   26822 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:59:30.634417   26822 main.go:141] libmachine: (ha-418460-m04) Calling .DriverName
	I0318 22:59:30.634546   26822 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 22:59:30.634564   26822 main.go:141] libmachine: (ha-418460-m04) Calling .GetSSHHostname
	I0318 22:59:30.637168   26822 main.go:141] libmachine: (ha-418460-m04) DBG | domain ha-418460-m04 has defined MAC address 52:54:00:59:4f:26 in network mk-ha-418460
	I0318 22:59:30.637541   26822 main.go:141] libmachine: (ha-418460-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4f:26", ip: ""} in network mk-ha-418460: {Iface:virbr1 ExpiryTime:2024-03-18 23:57:13 +0000 UTC Type:0 Mac:52:54:00:59:4f:26 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-418460-m04 Clientid:01:52:54:00:59:4f:26}
	I0318 22:59:30.637581   26822 main.go:141] libmachine: (ha-418460-m04) DBG | domain ha-418460-m04 has defined IP address 192.168.39.52 and MAC address 52:54:00:59:4f:26 in network mk-ha-418460
	I0318 22:59:30.637738   26822 main.go:141] libmachine: (ha-418460-m04) Calling .GetSSHPort
	I0318 22:59:30.637877   26822 main.go:141] libmachine: (ha-418460-m04) Calling .GetSSHKeyPath
	I0318 22:59:30.638004   26822 main.go:141] libmachine: (ha-418460-m04) Calling .GetSSHUsername
	I0318 22:59:30.638141   26822 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/ha-418460-m04/id_rsa Username:docker}
	I0318 22:59:30.726020   26822 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:59:30.745667   26822 status.go:257] ha-418460-m04 status: &{Name:ha-418460-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (92.39s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.4s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.40s)

TestMultiControlPlane/serial/RestartSecondaryNode (45.03s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-418460 node start m02 -v=7 --alsologtostderr: (44.126009515s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (45.03s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.53s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (487.9s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-418460 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-418460 -v=7 --alsologtostderr
E0318 23:00:54.742879   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 23:01:22.427167   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 23:03:34.004538   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-418460 -v=7 --alsologtostderr: (4m38.610624657s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-418460 --wait=true -v=7 --alsologtostderr
E0318 23:04:57.053665   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 23:05:54.742962   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-418460 --wait=true -v=7 --alsologtostderr: (3m29.167697154s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-418460
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (487.90s)

TestMultiControlPlane/serial/DeleteSecondaryNode (8.06s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-418460 node delete m03 -v=7 --alsologtostderr: (7.319800495s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.06s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

TestMultiControlPlane/serial/StopCluster (275.76s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 stop -v=7 --alsologtostderr
E0318 23:08:34.004540   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 23:10:54.743142   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 23:12:17.787384   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-418460 stop -v=7 --alsologtostderr: (4m35.642770283s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-418460 status -v=7 --alsologtostderr: exit status 7 (117.418668ms)

-- stdout --
	ha-418460
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-418460-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-418460-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0318 23:13:08.734479   29928 out.go:291] Setting OutFile to fd 1 ...
	I0318 23:13:08.734752   29928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 23:13:08.734762   29928 out.go:304] Setting ErrFile to fd 2...
	I0318 23:13:08.734767   29928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 23:13:08.734925   29928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
	I0318 23:13:08.735074   29928 out.go:298] Setting JSON to false
	I0318 23:13:08.735098   29928 mustload.go:65] Loading cluster: ha-418460
	I0318 23:13:08.735225   29928 notify.go:220] Checking for updates...
	I0318 23:13:08.735584   29928 config.go:182] Loaded profile config "ha-418460": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0318 23:13:08.735605   29928 status.go:255] checking status of ha-418460 ...
	I0318 23:13:08.736071   29928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 23:13:08.736127   29928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 23:13:08.757756   29928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45409
	I0318 23:13:08.758119   29928 main.go:141] libmachine: () Calling .GetVersion
	I0318 23:13:08.758636   29928 main.go:141] libmachine: Using API Version  1
	I0318 23:13:08.758658   29928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 23:13:08.759033   29928 main.go:141] libmachine: () Calling .GetMachineName
	I0318 23:13:08.759199   29928 main.go:141] libmachine: (ha-418460) Calling .GetState
	I0318 23:13:08.760770   29928 status.go:330] ha-418460 host status = "Stopped" (err=<nil>)
	I0318 23:13:08.760783   29928 status.go:343] host is not running, skipping remaining checks
	I0318 23:13:08.760790   29928 status.go:257] ha-418460 status: &{Name:ha-418460 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 23:13:08.760811   29928 status.go:255] checking status of ha-418460-m02 ...
	I0318 23:13:08.761069   29928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 23:13:08.761108   29928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 23:13:08.774893   29928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45853
	I0318 23:13:08.775314   29928 main.go:141] libmachine: () Calling .GetVersion
	I0318 23:13:08.775745   29928 main.go:141] libmachine: Using API Version  1
	I0318 23:13:08.775772   29928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 23:13:08.776115   29928 main.go:141] libmachine: () Calling .GetMachineName
	I0318 23:13:08.776281   29928 main.go:141] libmachine: (ha-418460-m02) Calling .GetState
	I0318 23:13:08.777815   29928 status.go:330] ha-418460-m02 host status = "Stopped" (err=<nil>)
	I0318 23:13:08.777831   29928 status.go:343] host is not running, skipping remaining checks
	I0318 23:13:08.777841   29928 status.go:257] ha-418460-m02 status: &{Name:ha-418460-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 23:13:08.777860   29928 status.go:255] checking status of ha-418460-m04 ...
	I0318 23:13:08.778114   29928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 23:13:08.778151   29928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 23:13:08.791313   29928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43565
	I0318 23:13:08.791658   29928 main.go:141] libmachine: () Calling .GetVersion
	I0318 23:13:08.792054   29928 main.go:141] libmachine: Using API Version  1
	I0318 23:13:08.792066   29928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 23:13:08.792312   29928 main.go:141] libmachine: () Calling .GetMachineName
	I0318 23:13:08.792480   29928 main.go:141] libmachine: (ha-418460-m04) Calling .GetState
	I0318 23:13:08.793818   29928 status.go:330] ha-418460-m04 host status = "Stopped" (err=<nil>)
	I0318 23:13:08.793833   29928 status.go:343] host is not running, skipping remaining checks
	I0318 23:13:08.793840   29928 status.go:257] ha-418460-m04 status: &{Name:ha-418460-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (275.76s)

TestMultiControlPlane/serial/RestartCluster (159.02s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-418460 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0318 23:13:34.005097   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-418460 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m38.260286446s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (159.02s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

TestMultiControlPlane/serial/AddSecondaryNode (74.43s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-418460 --control-plane -v=7 --alsologtostderr
E0318 23:15:54.743246   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-418460 --control-plane -v=7 --alsologtostderr: (1m13.575515178s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-418460 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.43s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

TestJSONOutput/start/Command (99.53s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-732278 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0318 23:18:34.005260   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-732278 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m39.527705692s)
--- PASS: TestJSONOutput/start/Command (99.53s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-732278 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-732278 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.35s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-732278 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-732278 --output=json --user=testUser: (7.345830128s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-018708 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-018708 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (71.778889ms)

-- stdout --
	{"specversion":"1.0","id":"d44b7fb7-3e90-413e-8c7a-fc946cbcf358","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-018708] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6780902d-c85a-4820-8da1-c8961095fa5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17786"}}
	{"specversion":"1.0","id":"0094eb17-e8c3-47bb-b8db-e9814494962f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"990c7dc1-b1b1-4131-9292-4205be3f8be5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17786-6465/kubeconfig"}}
	{"specversion":"1.0","id":"f5fbc7f9-d74f-4180-8e69-d54b10dd0617","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17786-6465/.minikube"}}
	{"specversion":"1.0","id":"52e0c433-e839-42ef-8e1d-6907ffac4bf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0f5d6a11-1cb1-4584-bf0c-dddd377bd9b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"707ba1e1-3733-4ba7-9d9e-b8e6757482d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-018708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-018708
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (92.85s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-826888 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-826888 --driver=kvm2  --container-runtime=containerd: (45.630938987s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-830063 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-830063 --driver=kvm2  --container-runtime=containerd: (44.371095667s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-826888
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-830063
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-830063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-830063
helpers_test.go:175: Cleaning up "first-826888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-826888
--- PASS: TestMinikubeProfile (92.85s)

TestMountStart/serial/StartWithMountFirst (31.01s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-401018 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0318 23:20:54.742864   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-401018 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (30.005524422s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.01s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-401018 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-401018 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (28.55s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-418418 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-418418 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.550878052s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.55s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418418 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418418 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-401018 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418418 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418418 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.43s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-418418
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-418418: (1.425192165s)
--- PASS: TestMountStart/serial/Stop (1.43s)

TestMountStart/serial/RestartStopped (24.18s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-418418
E0318 23:21:37.054738   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-418418: (23.180571353s)
--- PASS: TestMountStart/serial/RestartStopped (24.18s)

TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418418 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418418 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (104.23s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-855062 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0318 23:23:34.004752   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-855062 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m43.810858458s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (104.23s)

TestMultiNode/serial/DeployApp2Nodes (6.04s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-855062 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-855062 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-855062 -- rollout status deployment/busybox: (4.430565417s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-855062 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-855062 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-855062 -- exec busybox-7fdf7869d9-qkbvf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-855062 -- exec busybox-7fdf7869d9-rr6jb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-855062 -- exec busybox-7fdf7869d9-qkbvf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-855062 -- exec busybox-7fdf7869d9-rr6jb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-855062 -- exec busybox-7fdf7869d9-qkbvf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-855062 -- exec busybox-7fdf7869d9-rr6jb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.04s)

TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-855062 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-855062 -- exec busybox-7fdf7869d9-qkbvf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-855062 -- exec busybox-7fdf7869d9-qkbvf -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-855062 -- exec busybox-7fdf7869d9-rr6jb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-855062 -- exec busybox-7fdf7869d9-rr6jb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)

TestMultiNode/serial/AddNode (42.47s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-855062 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-855062 -v 3 --alsologtostderr: (41.880665815s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.47s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-855062 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.29s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 cp testdata/cp-test.txt multinode-855062:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 cp multinode-855062:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3118982215/001/cp-test_multinode-855062.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 cp multinode-855062:/home/docker/cp-test.txt multinode-855062-m02:/home/docker/cp-test_multinode-855062_multinode-855062-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062-m02 "sudo cat /home/docker/cp-test_multinode-855062_multinode-855062-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 cp multinode-855062:/home/docker/cp-test.txt multinode-855062-m03:/home/docker/cp-test_multinode-855062_multinode-855062-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062-m03 "sudo cat /home/docker/cp-test_multinode-855062_multinode-855062-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 cp testdata/cp-test.txt multinode-855062-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 cp multinode-855062-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3118982215/001/cp-test_multinode-855062-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 cp multinode-855062-m02:/home/docker/cp-test.txt multinode-855062:/home/docker/cp-test_multinode-855062-m02_multinode-855062.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062 "sudo cat /home/docker/cp-test_multinode-855062-m02_multinode-855062.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 cp multinode-855062-m02:/home/docker/cp-test.txt multinode-855062-m03:/home/docker/cp-test_multinode-855062-m02_multinode-855062-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062-m03 "sudo cat /home/docker/cp-test_multinode-855062-m02_multinode-855062-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 cp testdata/cp-test.txt multinode-855062-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 cp multinode-855062-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3118982215/001/cp-test_multinode-855062-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 cp multinode-855062-m03:/home/docker/cp-test.txt multinode-855062:/home/docker/cp-test_multinode-855062-m03_multinode-855062.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062 "sudo cat /home/docker/cp-test_multinode-855062-m03_multinode-855062.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 cp multinode-855062-m03:/home/docker/cp-test.txt multinode-855062-m02:/home/docker/cp-test_multinode-855062-m03_multinode-855062-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 ssh -n multinode-855062-m02 "sudo cat /home/docker/cp-test_multinode-855062-m03_multinode-855062-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.29s)

TestMultiNode/serial/StopNode (2.36s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-855062 node stop m03: (1.499113497s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-855062 status: exit status 7 (423.69458ms)

-- stdout --
	multinode-855062
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-855062-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-855062-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-855062 status --alsologtostderr: exit status 7 (431.446222ms)

-- stdout --
	multinode-855062
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-855062-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-855062-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0318 23:24:39.601933   37266 out.go:291] Setting OutFile to fd 1 ...
	I0318 23:24:39.602091   37266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 23:24:39.602103   37266 out.go:304] Setting ErrFile to fd 2...
	I0318 23:24:39.602110   37266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 23:24:39.602368   37266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
	I0318 23:24:39.602531   37266 out.go:298] Setting JSON to false
	I0318 23:24:39.602553   37266 mustload.go:65] Loading cluster: multinode-855062
	I0318 23:24:39.602652   37266 notify.go:220] Checking for updates...
	I0318 23:24:39.602905   37266 config.go:182] Loaded profile config "multinode-855062": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0318 23:24:39.602918   37266 status.go:255] checking status of multinode-855062 ...
	I0318 23:24:39.603404   37266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 23:24:39.603468   37266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 23:24:39.618271   37266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I0318 23:24:39.618589   37266 main.go:141] libmachine: () Calling .GetVersion
	I0318 23:24:39.619110   37266 main.go:141] libmachine: Using API Version  1
	I0318 23:24:39.619129   37266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 23:24:39.619436   37266 main.go:141] libmachine: () Calling .GetMachineName
	I0318 23:24:39.619649   37266 main.go:141] libmachine: (multinode-855062) Calling .GetState
	I0318 23:24:39.621097   37266 status.go:330] multinode-855062 host status = "Running" (err=<nil>)
	I0318 23:24:39.621115   37266 host.go:66] Checking if "multinode-855062" exists ...
	I0318 23:24:39.621418   37266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 23:24:39.621454   37266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 23:24:39.635223   37266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33079
	I0318 23:24:39.635530   37266 main.go:141] libmachine: () Calling .GetVersion
	I0318 23:24:39.635987   37266 main.go:141] libmachine: Using API Version  1
	I0318 23:24:39.636083   37266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 23:24:39.636375   37266 main.go:141] libmachine: () Calling .GetMachineName
	I0318 23:24:39.636583   37266 main.go:141] libmachine: (multinode-855062) Calling .GetIP
	I0318 23:24:39.638914   37266 main.go:141] libmachine: (multinode-855062) DBG | domain multinode-855062 has defined MAC address 52:54:00:4d:64:19 in network mk-multinode-855062
	I0318 23:24:39.639356   37266 main.go:141] libmachine: (multinode-855062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:64:19", ip: ""} in network mk-multinode-855062: {Iface:virbr1 ExpiryTime:2024-03-19 00:22:12 +0000 UTC Type:0 Mac:52:54:00:4d:64:19 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-855062 Clientid:01:52:54:00:4d:64:19}
	I0318 23:24:39.639400   37266 main.go:141] libmachine: (multinode-855062) DBG | domain multinode-855062 has defined IP address 192.168.39.186 and MAC address 52:54:00:4d:64:19 in network mk-multinode-855062
	I0318 23:24:39.639510   37266 host.go:66] Checking if "multinode-855062" exists ...
	I0318 23:24:39.639870   37266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 23:24:39.639907   37266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 23:24:39.653533   37266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0318 23:24:39.653846   37266 main.go:141] libmachine: () Calling .GetVersion
	I0318 23:24:39.654300   37266 main.go:141] libmachine: Using API Version  1
	I0318 23:24:39.654329   37266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 23:24:39.654607   37266 main.go:141] libmachine: () Calling .GetMachineName
	I0318 23:24:39.654784   37266 main.go:141] libmachine: (multinode-855062) Calling .DriverName
	I0318 23:24:39.654932   37266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 23:24:39.654964   37266 main.go:141] libmachine: (multinode-855062) Calling .GetSSHHostname
	I0318 23:24:39.657448   37266 main.go:141] libmachine: (multinode-855062) DBG | domain multinode-855062 has defined MAC address 52:54:00:4d:64:19 in network mk-multinode-855062
	I0318 23:24:39.657786   37266 main.go:141] libmachine: (multinode-855062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:64:19", ip: ""} in network mk-multinode-855062: {Iface:virbr1 ExpiryTime:2024-03-19 00:22:12 +0000 UTC Type:0 Mac:52:54:00:4d:64:19 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-855062 Clientid:01:52:54:00:4d:64:19}
	I0318 23:24:39.657812   37266 main.go:141] libmachine: (multinode-855062) DBG | domain multinode-855062 has defined IP address 192.168.39.186 and MAC address 52:54:00:4d:64:19 in network mk-multinode-855062
	I0318 23:24:39.657940   37266 main.go:141] libmachine: (multinode-855062) Calling .GetSSHPort
	I0318 23:24:39.658154   37266 main.go:141] libmachine: (multinode-855062) Calling .GetSSHKeyPath
	I0318 23:24:39.658302   37266 main.go:141] libmachine: (multinode-855062) Calling .GetSSHUsername
	I0318 23:24:39.658427   37266 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/multinode-855062/id_rsa Username:docker}
	I0318 23:24:39.744841   37266 ssh_runner.go:195] Run: systemctl --version
	I0318 23:24:39.751654   37266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 23:24:39.768411   37266 kubeconfig.go:125] found "multinode-855062" server: "https://192.168.39.186:8443"
	I0318 23:24:39.768436   37266 api_server.go:166] Checking apiserver status ...
	I0318 23:24:39.768471   37266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 23:24:39.784296   37266 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1171/cgroup
	W0318 23:24:39.798301   37266 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1171/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 23:24:39.798345   37266 ssh_runner.go:195] Run: ls
	I0318 23:24:39.803108   37266 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I0318 23:24:39.807533   37266 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I0318 23:24:39.807550   37266 status.go:422] multinode-855062 apiserver status = Running (err=<nil>)
	I0318 23:24:39.807560   37266 status.go:257] multinode-855062 status: &{Name:multinode-855062 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 23:24:39.807590   37266 status.go:255] checking status of multinode-855062-m02 ...
	I0318 23:24:39.807890   37266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 23:24:39.807927   37266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 23:24:39.823096   37266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43273
	I0318 23:24:39.823489   37266 main.go:141] libmachine: () Calling .GetVersion
	I0318 23:24:39.823899   37266 main.go:141] libmachine: Using API Version  1
	I0318 23:24:39.823925   37266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 23:24:39.824230   37266 main.go:141] libmachine: () Calling .GetMachineName
	I0318 23:24:39.824412   37266 main.go:141] libmachine: (multinode-855062-m02) Calling .GetState
	I0318 23:24:39.825845   37266 status.go:330] multinode-855062-m02 host status = "Running" (err=<nil>)
	I0318 23:24:39.825857   37266 host.go:66] Checking if "multinode-855062-m02" exists ...
	I0318 23:24:39.826160   37266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 23:24:39.826212   37266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 23:24:39.840016   37266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44085
	I0318 23:24:39.840376   37266 main.go:141] libmachine: () Calling .GetVersion
	I0318 23:24:39.840813   37266 main.go:141] libmachine: Using API Version  1
	I0318 23:24:39.840833   37266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 23:24:39.841208   37266 main.go:141] libmachine: () Calling .GetMachineName
	I0318 23:24:39.841386   37266 main.go:141] libmachine: (multinode-855062-m02) Calling .GetIP
	I0318 23:24:39.844006   37266 main.go:141] libmachine: (multinode-855062-m02) DBG | domain multinode-855062-m02 has defined MAC address 52:54:00:44:3b:26 in network mk-multinode-855062
	I0318 23:24:39.844434   37266 main.go:141] libmachine: (multinode-855062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:3b:26", ip: ""} in network mk-multinode-855062: {Iface:virbr1 ExpiryTime:2024-03-19 00:23:15 +0000 UTC Type:0 Mac:52:54:00:44:3b:26 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:multinode-855062-m02 Clientid:01:52:54:00:44:3b:26}
	I0318 23:24:39.844462   37266 main.go:141] libmachine: (multinode-855062-m02) DBG | domain multinode-855062-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:44:3b:26 in network mk-multinode-855062
	I0318 23:24:39.844579   37266 host.go:66] Checking if "multinode-855062-m02" exists ...
	I0318 23:24:39.844872   37266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 23:24:39.844906   37266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 23:24:39.859038   37266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45889
	I0318 23:24:39.859366   37266 main.go:141] libmachine: () Calling .GetVersion
	I0318 23:24:39.859779   37266 main.go:141] libmachine: Using API Version  1
	I0318 23:24:39.859791   37266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 23:24:39.860117   37266 main.go:141] libmachine: () Calling .GetMachineName
	I0318 23:24:39.860288   37266 main.go:141] libmachine: (multinode-855062-m02) Calling .DriverName
	I0318 23:24:39.860463   37266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 23:24:39.860481   37266 main.go:141] libmachine: (multinode-855062-m02) Calling .GetSSHHostname
	I0318 23:24:39.862498   37266 main.go:141] libmachine: (multinode-855062-m02) DBG | domain multinode-855062-m02 has defined MAC address 52:54:00:44:3b:26 in network mk-multinode-855062
	I0318 23:24:39.862867   37266 main.go:141] libmachine: (multinode-855062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:3b:26", ip: ""} in network mk-multinode-855062: {Iface:virbr1 ExpiryTime:2024-03-19 00:23:15 +0000 UTC Type:0 Mac:52:54:00:44:3b:26 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:multinode-855062-m02 Clientid:01:52:54:00:44:3b:26}
	I0318 23:24:39.862889   37266 main.go:141] libmachine: (multinode-855062-m02) DBG | domain multinode-855062-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:44:3b:26 in network mk-multinode-855062
	I0318 23:24:39.862999   37266 main.go:141] libmachine: (multinode-855062-m02) Calling .GetSSHPort
	I0318 23:24:39.863157   37266 main.go:141] libmachine: (multinode-855062-m02) Calling .GetSSHKeyPath
	I0318 23:24:39.863328   37266 main.go:141] libmachine: (multinode-855062-m02) Calling .GetSSHUsername
	I0318 23:24:39.863440   37266 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17786-6465/.minikube/machines/multinode-855062-m02/id_rsa Username:docker}
	I0318 23:24:39.943956   37266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 23:24:39.959638   37266 status.go:257] multinode-855062-m02 status: &{Name:multinode-855062-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0318 23:24:39.959677   37266 status.go:255] checking status of multinode-855062-m03 ...
	I0318 23:24:39.959974   37266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 23:24:39.960015   37266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 23:24:39.975226   37266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36857
	I0318 23:24:39.975596   37266 main.go:141] libmachine: () Calling .GetVersion
	I0318 23:24:39.976053   37266 main.go:141] libmachine: Using API Version  1
	I0318 23:24:39.976074   37266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 23:24:39.976341   37266 main.go:141] libmachine: () Calling .GetMachineName
	I0318 23:24:39.976494   37266 main.go:141] libmachine: (multinode-855062-m03) Calling .GetState
	I0318 23:24:39.977811   37266 status.go:330] multinode-855062-m03 host status = "Stopped" (err=<nil>)
	I0318 23:24:39.977823   37266 status.go:343] host is not running, skipping remaining checks
	I0318 23:24:39.977830   37266 status.go:257] multinode-855062-m03 status: &{Name:multinode-855062-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (26.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-855062 node start m03 -v=7 --alsologtostderr: (25.385060629s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (26.02s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (294.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-855062
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-855062
E0318 23:25:54.742803   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-855062: (3m5.483924174s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-855062 --wait=true -v=8 --alsologtostderr
E0318 23:28:34.004394   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 23:28:57.788040   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-855062 --wait=true -v=8 --alsologtostderr: (1m49.255976868s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-855062
--- PASS: TestMultiNode/serial/RestartKeepsNodes (294.85s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-855062 node delete m03: (1.606314924s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.14s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (184.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 stop
E0318 23:30:54.743381   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-855062 stop: (3m3.951262075s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-855062 status: exit status 7 (91.667975ms)

                                                
                                                
-- stdout --
	multinode-855062
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-855062-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-855062 status --alsologtostderr: exit status 7 (87.341214ms)

                                                
                                                
-- stdout --
	multinode-855062
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-855062-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 23:33:07.077657   39366 out.go:291] Setting OutFile to fd 1 ...
	I0318 23:33:07.078170   39366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 23:33:07.078192   39366 out.go:304] Setting ErrFile to fd 2...
	I0318 23:33:07.078199   39366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 23:33:07.078661   39366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
	I0318 23:33:07.079061   39366 out.go:298] Setting JSON to false
	I0318 23:33:07.079146   39366 mustload.go:65] Loading cluster: multinode-855062
	I0318 23:33:07.079256   39366 notify.go:220] Checking for updates...
	I0318 23:33:07.079559   39366 config.go:182] Loaded profile config "multinode-855062": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0318 23:33:07.079574   39366 status.go:255] checking status of multinode-855062 ...
	I0318 23:33:07.079931   39366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 23:33:07.079987   39366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 23:33:07.094288   39366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46729
	I0318 23:33:07.094644   39366 main.go:141] libmachine: () Calling .GetVersion
	I0318 23:33:07.095167   39366 main.go:141] libmachine: Using API Version  1
	I0318 23:33:07.095184   39366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 23:33:07.095550   39366 main.go:141] libmachine: () Calling .GetMachineName
	I0318 23:33:07.095765   39366 main.go:141] libmachine: (multinode-855062) Calling .GetState
	I0318 23:33:07.097189   39366 status.go:330] multinode-855062 host status = "Stopped" (err=<nil>)
	I0318 23:33:07.097203   39366 status.go:343] host is not running, skipping remaining checks
	I0318 23:33:07.097210   39366 status.go:257] multinode-855062 status: &{Name:multinode-855062 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 23:33:07.097263   39366 status.go:255] checking status of multinode-855062-m02 ...
	I0318 23:33:07.097528   39366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0318 23:33:07.097572   39366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 23:33:07.111247   39366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I0318 23:33:07.111552   39366 main.go:141] libmachine: () Calling .GetVersion
	I0318 23:33:07.111955   39366 main.go:141] libmachine: Using API Version  1
	I0318 23:33:07.111974   39366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 23:33:07.112227   39366 main.go:141] libmachine: () Calling .GetMachineName
	I0318 23:33:07.112401   39366 main.go:141] libmachine: (multinode-855062-m02) Calling .GetState
	I0318 23:33:07.113732   39366 status.go:330] multinode-855062-m02 host status = "Stopped" (err=<nil>)
	I0318 23:33:07.113747   39366 status.go:343] host is not running, skipping remaining checks
	I0318 23:33:07.113752   39366 status.go:257] multinode-855062-m02 status: &{Name:multinode-855062-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.13s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (78.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-855062 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0318 23:33:34.004503   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-855062 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m17.493450588s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-855062 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.02s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-855062
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-855062-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-855062-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (70.568727ms)

                                                
                                                
-- stdout --
	* [multinode-855062-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17786
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17786-6465/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17786-6465/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-855062-m02' is duplicated with machine name 'multinode-855062-m02' in profile 'multinode-855062'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-855062-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-855062-m03 --driver=kvm2  --container-runtime=containerd: (44.218717946s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-855062
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-855062: exit status 80 (212.664458ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-855062 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-855062-m03 already exists in multinode-855062-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-855062-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.53s)

                                                
                                    
TestPreload (379.17s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-986559 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0318 23:35:54.743251   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 23:38:17.054930   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 23:38:34.005160   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-986559 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (3m37.59851134s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-986559 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-986559 image pull gcr.io/k8s-minikube/busybox: (3.020288079s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-986559
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-986559: (1m31.678840472s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-986559 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0318 23:40:54.742662   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-986559 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m5.781582612s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-986559 image list
helpers_test.go:175: Cleaning up "test-preload-986559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-986559
--- PASS: TestPreload (379.17s)

                                                
                                    
TestScheduledStopUnix (118.16s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-672154 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-672154 --memory=2048 --driver=kvm2  --container-runtime=containerd: (46.452281231s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-672154 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-672154 -n scheduled-stop-672154
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-672154 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-672154 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-672154 -n scheduled-stop-672154
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-672154
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-672154 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-672154
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-672154: exit status 7 (74.611378ms)

                                                
                                                
-- stdout --
	scheduled-stop-672154
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-672154 -n scheduled-stop-672154
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-672154 -n scheduled-stop-672154: exit status 7 (72.287368ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-672154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-672154
--- PASS: TestScheduledStopUnix (118.16s)

                                                
                                    
TestRunningBinaryUpgrade (204.1s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1331721368 start -p running-upgrade-868597 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0318 23:43:34.004637   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1331721368 start -p running-upgrade-868597 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m13.264502396s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-868597 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0318 23:45:54.743597   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-868597 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m5.646484609s)
helpers_test.go:175: Cleaning up "running-upgrade-868597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-868597
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-868597: (1.18710434s)
--- PASS: TestRunningBinaryUpgrade (204.10s)

                                                
                                    
TestKubernetesUpgrade (199.93s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-968225 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-968225 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m0.10649437s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-968225
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-968225: (2.335251574s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-968225 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-968225 status --format={{.Host}}: exit status 7 (88.545955ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-968225 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-968225 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m51.090957098s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-968225 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-968225 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-968225 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (138.913553ms)

-- stdout --
	* [kubernetes-upgrade-968225] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17786
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17786-6465/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17786-6465/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-968225
	    minikube start -p kubernetes-upgrade-968225 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9682252 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-968225 --kubernetes-version=v1.30.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-968225 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-968225 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (24.823232648s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-968225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-968225
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-968225: (1.254711708s)
--- PASS: TestKubernetesUpgrade (199.93s)
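The downgrade rejection exercised above can be illustrated with a minimal version comparison. This is a hypothetical sketch of the kind of check the CLI performs before refusing `K8S_DOWNGRADE_UNSUPPORTED`, not minikube's actual implementation:

```shell
#!/bin/sh
# Hypothetical sketch: detect a downgrade by comparing the requested
# Kubernetes version against the existing cluster's version
# (pre-release tags such as "-beta.0" stripped first).
version_ge() {
  # true if $1 >= $2, using GNU sort's version ordering
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

existing="1.30.0"   # from the running v1.30.0-beta.0 cluster (tag stripped)
requested="1.20.0"  # from --kubernetes-version=v1.20.0

if version_ge "$requested" "$existing"; then
  echo "upgrade or same version: proceeding"
else
  echo "K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade v$existing cluster to v$requested"
fi
```

With the versions from the log, the comparison fails and the downgrade branch is taken, matching the exit status 106 path the test asserts.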

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-737017 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-737017 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (95.471592ms)

-- stdout --
	* [NoKubernetes-737017] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17786
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17786-6465/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17786-6465/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
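The exit-status-14 usage failure above comes from flag validation: `--no-kubernetes` and an explicit `--kubernetes-version` are mutually exclusive. A minimal sketch of that kind of mutual-exclusion check (hypothetical, not minikube's actual code):

```shell
#!/bin/sh
# Hypothetical flag-validation sketch: reject --no-kubernetes combined
# with an explicit --kubernetes-version, mirroring the MK_USAGE error
# and exit status 14 the test expects.
no_kubernetes=true
kubernetes_version="1.20"

validate_flags() {
  if [ "$no_kubernetes" = true ] && [ -n "$kubernetes_version" ]; then
    echo "MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes"
    return 14
  fi
  return 0
}

rc=0
validate_flags || rc=$?
echo "would exit with status $rc"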

TestNoKubernetes/serial/StartWithK8s (100.67s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-737017 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-737017 --driver=kvm2  --container-runtime=containerd: (1m40.414221787s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-737017 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (100.67s)

TestNoKubernetes/serial/StartWithStopK8s (49.57s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-737017 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-737017 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (48.317710361s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-737017 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-737017 status -o json: exit status 2 (230.490973ms)

-- stdout --
	{"Name":"NoKubernetes-737017","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-737017
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-737017: (1.020647673s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (49.57s)
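The JSON status above is what the test inspects: host still running, kubelet and apiserver stopped. A small sketch of checking that flat payload in shell, without needing jq (the JSON literal is copied from the log):

```shell
#!/bin/sh
# Check the minikube status payload (copied from the log above) for a
# stopped kubelet; a case glob is enough for this flat JSON shape.
status='{"Name":"NoKubernetes-737017","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'

case "$status" in
  *'"Kubelet":"Stopped"'*) echo "kubelet stopped as expected" ;;
  *)                       echo "unexpected kubelet state" ;;
esac
```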

TestStoppedBinaryUpgrade/Setup (2.96s)

=== RUN   TestStoppedBinaryUpgrade/Setup
E0318 23:45:37.788430   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
--- PASS: TestStoppedBinaryUpgrade/Setup (2.96s)

TestStoppedBinaryUpgrade/Upgrade (156.49s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2573523297 start -p stopped-upgrade-647649 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2573523297 start -p stopped-upgrade-647649 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m1.442157353s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2573523297 -p stopped-upgrade-647649 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2573523297 -p stopped-upgrade-647649 stop: (2.12772756s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-647649 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-647649 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m32.920237504s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (156.49s)

TestNoKubernetes/serial/Start (39.93s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-737017 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-737017 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (39.92877839s)
--- PASS: TestNoKubernetes/serial/Start (39.93s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-737017 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-737017 "sudo systemctl is-active --quiet service kubelet": exit status 1 (227.940225ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
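The check above relies on `systemctl is-active` returning non-zero for an inactive unit, which is why the SSH session reports `Process exited with status 3` (systemd's convention for an inactive unit). A sketch of the same probe run locally; it assumes a systemd host, and on a machine without `systemctl` the else branch is taken with exit status 127:

```shell
#!/bin/sh
# Probe a unit the way the test does over SSH: is-active exits 0 only
# when the unit is active; non-zero (e.g. 3 for inactive) otherwise.
unit="kubelet"
if systemctl is-active --quiet "$unit" 2>/dev/null; then
  echo "$unit is active"
else
  echo "$unit is not active (is-active exit status $?)"
fi
```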

TestNoKubernetes/serial/ProfileList (5.59s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (4.728585971s)
--- PASS: TestNoKubernetes/serial/ProfileList (5.59s)

TestNoKubernetes/serial/Stop (1.65s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-737017
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-737017: (1.652566799s)
--- PASS: TestNoKubernetes/serial/Stop (1.65s)

TestNoKubernetes/serial/StartNoArgs (41.28s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-737017 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-737017 --driver=kvm2  --container-runtime=containerd: (41.277805612s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (41.28s)

TestNetworkPlugins/group/false (3.05s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-800784 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-800784 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (117.64097ms)

-- stdout --
	* [false-800784] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17786
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17786-6465/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17786-6465/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0318 23:46:56.980516   46445 out.go:291] Setting OutFile to fd 1 ...
	I0318 23:46:56.980752   46445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 23:46:56.980761   46445 out.go:304] Setting ErrFile to fd 2...
	I0318 23:46:56.980765   46445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 23:46:56.980959   46445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17786-6465/.minikube/bin
	I0318 23:46:56.981480   46445 out.go:298] Setting JSON to false
	I0318 23:46:56.982322   46445 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5360,"bootTime":1710800257,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 23:46:56.982377   46445 start.go:139] virtualization: kvm guest
	I0318 23:46:56.984663   46445 out.go:177] * [false-800784] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 23:46:56.986221   46445 out.go:177]   - MINIKUBE_LOCATION=17786
	I0318 23:46:56.987531   46445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 23:46:56.986247   46445 notify.go:220] Checking for updates...
	I0318 23:46:56.990099   46445 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17786-6465/kubeconfig
	I0318 23:46:56.991501   46445 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17786-6465/.minikube
	I0318 23:46:56.992974   46445 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 23:46:56.994185   46445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 23:46:56.995723   46445 config.go:182] Loaded profile config "NoKubernetes-737017": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0318 23:46:56.995830   46445 config.go:182] Loaded profile config "force-systemd-env-928981": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0318 23:46:56.995928   46445 config.go:182] Loaded profile config "stopped-upgrade-647649": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0318 23:46:56.996013   46445 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 23:46:57.032616   46445 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 23:46:57.033903   46445 start.go:297] selected driver: kvm2
	I0318 23:46:57.033914   46445 start.go:901] validating driver "kvm2" against <nil>
	I0318 23:46:57.033934   46445 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 23:46:57.035903   46445 out.go:177] 
	W0318 23:46:57.037211   46445 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0318 23:46:57.038566   46445 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-800784 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-800784

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-800784

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-800784

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-800784

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-800784

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-800784

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-800784

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-800784

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-800784

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-800784

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-800784

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-800784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-800784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-800784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-800784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-800784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-800784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-800784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-800784" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-800784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-800784" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-800784" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-800784

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

>>> host: docker system info:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

>>> host: cri-docker daemon status:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

>>> host: cri-docker daemon config:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

>>> host: cri-dockerd version:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

>>> host: containerd daemon status:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

>>> host: containerd daemon config:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

>>> host: /etc/containerd/config.toml:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

>>> host: containerd config dump:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

>>> host: crio daemon status:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

>>> host: crio daemon config:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

>>> host: /etc/crio:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

>>> host: crio config:
* Profile "false-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800784"

----------------------- debugLogs end: false-800784 [took: 2.79108009s] --------------------------------
helpers_test.go:175: Cleaning up "false-800784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-800784
--- PASS: TestNetworkPlugins/group/false (3.05s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-737017 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-737017 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.971817ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

TestPause/serial/Start (118.44s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-659808 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-659808 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m58.443186663s)
--- PASS: TestPause/serial/Start (118.44s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-647649
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

TestNetworkPlugins/group/auto/Start (127.15s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-800784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-800784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m7.153062395s)
--- PASS: TestNetworkPlugins/group/auto/Start (127.15s)

TestNetworkPlugins/group/kindnet/Start (70.36s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-800784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-800784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m10.357133844s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.36s)

TestPause/serial/SecondStartNoReconfiguration (53.01s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-659808 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0318 23:50:54.743344   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-659808 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (52.986302763s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (53.01s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-8tjzr" [530db1de-8876-4f19-a32d-53396d724734] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005560295s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestPause/serial/Pause (0.73s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-659808 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

TestPause/serial/VerifyStatus (0.25s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-659808 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-659808 --output=json --layout=cluster: exit status 2 (254.524786ms)

-- stdout --
	{"Name":"pause-659808","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-659808","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)

TestPause/serial/Unpause (0.7s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-659808 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

TestPause/serial/PauseAgain (0.88s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-659808 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

TestPause/serial/DeletePaused (0.8s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-659808 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.80s)

TestPause/serial/VerifyDeletedResources (0.55s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.55s)

TestNetworkPlugins/group/calico/Start (103.76s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-800784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-800784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m43.759206221s)
--- PASS: TestNetworkPlugins/group/calico/Start (103.76s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-800784 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.4s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-800784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zwkwl" [a266372b-177a-4226-bca6-c95df5633c1c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zwkwl" [a266372b-177a-4226-bca6-c95df5633c1c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.156054808s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.40s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-800784 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (10.31s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-800784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qjnjg" [61d6f887-78bf-45c1-aff6-59fc28ea122d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qjnjg" [61d6f887-78bf-45c1-aff6-59fc28ea122d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005618819s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.31s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-800784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-800784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-800784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/auto/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-800784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-800784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-800784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/Start (95.96s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-800784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-800784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m35.958061003s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (95.96s)

TestNetworkPlugins/group/bridge/Start (95.73s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-800784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-800784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m35.734388439s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.73s)

TestNetworkPlugins/group/flannel/Start (106.7s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-800784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-800784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m46.703128953s)
--- PASS: TestNetworkPlugins/group/flannel/Start (106.70s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-szfjg" [b20f1680-824f-46d9-8735-631e71b61459] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005048584s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-800784 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

TestNetworkPlugins/group/calico/NetCatPod (9.28s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-800784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wcglb" [74cf4f36-645f-409f-930a-04b8e8f53890] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wcglb" [74cf4f36-645f-409f-930a-04b8e8f53890] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.051146968s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.28s)

TestNetworkPlugins/group/calico/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-800784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-800784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-800784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-800784 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-800784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nh8m9" [7a13450a-5143-40db-a561-43866d4df30e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nh8m9" [7a13450a-5143-40db-a561-43866d4df30e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.0051984s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-800784 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/bridge/NetCatPod (9.37s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-800784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tv28p" [5cbd1008-f83e-4fad-94e5-5dcbb29cb318] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tv28p" [5cbd1008-f83e-4fad-94e5-5dcbb29cb318] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.008593915s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.37s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-800784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-800784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-800784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-800784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-800784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-800784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/Start (73.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-800784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
E0318 23:53:34.005173   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-800784 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m13.224305502s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (73.22s)

TestStartStop/group/old-k8s-version/serial/FirstStart (176.83s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-095323 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-095323 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m56.825290365s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (176.83s)

TestStartStop/group/no-preload/serial/FirstStart (232.23s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-305688 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-305688 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (3m52.229310613s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (232.23s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-sxhsz" [1359d8e0-7308-481a-a2c9-587714727073] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004432903s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-800784 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/flannel/NetCatPod (9.28s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-800784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lcngn" [92552a92-56a0-42b4-b90b-afaa925a3568] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lcngn" [92552a92-56a0-42b4-b90b-afaa925a3568] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.047719982s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.28s)

TestNetworkPlugins/group/flannel/DNS (0.35s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-800784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.35s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-800784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-800784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-800784 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-800784 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mpx9r" [6ca71676-b106-4194-afeb-d4108e6f0e5e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mpx9r" [6ca71676-b106-4194-afeb-d4108e6f0e5e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004617838s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

TestStartStop/group/embed-certs/serial/FirstStart (107.73s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-500975 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-500975 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m47.734048595s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (107.73s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-800784 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-800784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-800784 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
E0319 00:04:11.015739   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-674584 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0318 23:55:54.743323   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0318 23:56:05.817690   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
E0318 23:56:05.822960   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
E0318 23:56:05.833218   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
E0318 23:56:05.853538   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
E0318 23:56:05.893891   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
E0318 23:56:05.974407   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
E0318 23:56:06.134879   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
E0318 23:56:06.455214   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
E0318 23:56:07.095817   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
E0318 23:56:08.376632   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
E0318 23:56:10.937616   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
E0318 23:56:13.888163   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
E0318 23:56:13.893440   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
E0318 23:56:13.903677   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
E0318 23:56:13.923961   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
E0318 23:56:13.964230   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
E0318 23:56:14.044714   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
E0318 23:56:14.205086   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
E0318 23:56:14.525744   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
E0318 23:56:15.166360   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
E0318 23:56:16.057778   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-674584 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m6.276335396s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.28s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-674584 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e2566490-a616-47dc-9095-d412110f11a4] Pending
E0318 23:56:16.447492   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e2566490-a616-47dc-9095-d412110f11a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0318 23:56:19.008192   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e2566490-a616-47dc-9095-d412110f11a4] Running
E0318 23:56:24.128734   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
E0318 23:56:26.298203   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.004293576s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-674584 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.31s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-674584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-674584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.143111543s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-674584 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.52s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-674584 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-674584 --alsologtostderr -v=3: (1m32.521536279s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.52s)

TestStartStop/group/embed-certs/serial/DeployApp (10.30s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-500975 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [eb90fd6a-969b-4454-952c-d9046649a505] Pending
helpers_test.go:344: "busybox" [eb90fd6a-969b-4454-952c-d9046649a505] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0318 23:56:34.369331   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
helpers_test.go:344: "busybox" [eb90fd6a-969b-4454-952c-d9046649a505] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005429366s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-500975 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.41s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-095323 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bb4ec4a8-de78-438c-9110-ba0db70ae60a] Pending
helpers_test.go:344: "busybox" [bb4ec4a8-de78-438c-9110-ba0db70ae60a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bb4ec4a8-de78-438c-9110-ba0db70ae60a] Running
E0318 23:56:46.778679   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004142076s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-095323 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.41s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-500975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-500975 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/embed-certs/serial/Stop (92.48s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-500975 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-500975 --alsologtostderr -v=3: (1m32.476617157s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.48s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.00s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-095323 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-095323 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/old-k8s-version/serial/Stop (92.49s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-095323 --alsologtostderr -v=3
E0318 23:56:54.849989   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
E0318 23:57:27.739123   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
E0318 23:57:35.811096   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-095323 --alsologtostderr -v=3: (1m32.486771047s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.49s)

TestStartStop/group/no-preload/serial/DeployApp (11.29s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-305688 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [81940183-922c-4fc0-bd98-2e35603d60b2] Pending
helpers_test.go:344: "busybox" [81940183-922c-4fc0-bd98-2e35603d60b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [81940183-922c-4fc0-bd98-2e35603d60b2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004590234s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-305688 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.29s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-305688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-305688 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/no-preload/serial/Stop (92.47s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-305688 --alsologtostderr -v=3
E0318 23:57:55.017231   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
E0318 23:57:55.022467   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
E0318 23:57:55.032725   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
E0318 23:57:55.052931   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
E0318 23:57:55.093149   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
E0318 23:57:55.173408   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
E0318 23:57:55.333947   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
E0318 23:57:55.654676   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
E0318 23:57:56.295355   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
E0318 23:57:57.576474   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
E0318 23:58:00.137365   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-305688 --alsologtostderr -v=3: (1m32.473808152s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.47s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-674584 -n default-k8s-diff-port-674584
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-674584 -n default-k8s-diff-port-674584: exit status 7 (73.863925ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-674584 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-674584 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0318 23:58:05.257637   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
E0318 23:58:14.402563   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0318 23:58:14.407793   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0318 23:58:14.418008   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0318 23:58:14.438228   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0318 23:58:14.478635   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0318 23:58:14.558979   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0318 23:58:14.719491   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0318 23:58:15.040305   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0318 23:58:15.497868   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-674584 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m56.822692335s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-674584 -n default-k8s-diff-port-674584
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.10s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-500975 -n embed-certs-500975
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-500975 -n embed-certs-500975: exit status 7 (74.184426ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-500975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0318 23:58:15.680916   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (326.31s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-500975 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3
E0318 23:58:16.961530   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0318 23:58:17.658594   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0318 23:58:17.663894   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0318 23:58:17.674121   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0318 23:58:17.694437   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0318 23:58:17.734978   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0318 23:58:17.815223   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0318 23:58:17.975687   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0318 23:58:18.296295   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0318 23:58:18.936721   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0318 23:58:19.522470   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0318 23:58:20.217233   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0318 23:58:22.778167   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-500975 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.3: (5m26.048667212s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-500975 -n embed-certs-500975
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (326.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-095323 -n old-k8s-version-095323
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-095323 -n old-k8s-version-095323: exit status 7 (86.624417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-095323 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (599.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-095323 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0318 23:58:24.643434   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0318 23:58:27.898382   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0318 23:58:34.004395   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
E0318 23:58:34.883919   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0318 23:58:35.978983   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
E0318 23:58:38.139435   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0318 23:58:49.659690   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
E0318 23:58:55.365138   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0318 23:58:57.731921   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
E0318 23:58:58.619755   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0318 23:59:11.015786   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0318 23:59:11.021110   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0318 23:59:11.031353   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0318 23:59:11.051604   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0318 23:59:11.091903   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0318 23:59:11.172184   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0318 23:59:11.332322   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0318 23:59:11.653056   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0318 23:59:12.293668   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0318 23:59:13.574189   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0318 23:59:16.134807   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0318 23:59:16.939146   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
E0318 23:59:21.255561   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-095323 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (9m59.686785919s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-095323 -n old-k8s-version-095323
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (599.96s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-305688 -n no-preload-305688
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-305688 -n no-preload-305688: exit status 7 (85.167376ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-305688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (321.41s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-305688 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
E0318 23:59:31.496690   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0318 23:59:36.325905   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0318 23:59:39.580837   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0318 23:59:42.714795   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0318 23:59:42.720075   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0318 23:59:42.730340   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0318 23:59:42.751382   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0318 23:59:42.791637   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0318 23:59:42.872481   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0318 23:59:43.033119   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0318 23:59:43.353816   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0318 23:59:43.993974   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0318 23:59:45.274835   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0318 23:59:47.835549   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0318 23:59:51.977556   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0318 23:59:52.956238   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0319 00:00:03.196956   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0319 00:00:23.677627   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0319 00:00:32.938006   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0319 00:00:38.859580   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
E0319 00:00:54.743523   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0319 00:00:58.246461   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0319 00:01:01.501642   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0319 00:01:04.638727   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0319 00:01:05.817778   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
E0319 00:01:13.888146   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
E0319 00:01:33.500148   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/kindnet-800784/client.crt: no such file or directory
E0319 00:01:41.572769   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/auto-800784/client.crt: no such file or directory
E0319 00:01:54.858567   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0319 00:02:17.788997   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/functional-522698/client.crt: no such file or directory
E0319 00:02:26.559289   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
E0319 00:02:55.017213   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-305688 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (5m21.1414943s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-305688 -n no-preload-305688
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (321.41s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9xn67" [114ee842-2113-4e0e-b9a4-92b92beedc3b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004857592s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9xn67" [114ee842-2113-4e0e-b9a4-92b92beedc3b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00529667s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-674584 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-674584 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-674584 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-674584 -n default-k8s-diff-port-674584
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-674584 -n default-k8s-diff-port-674584: exit status 2 (259.618158ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-674584 -n default-k8s-diff-port-674584
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-674584 -n default-k8s-diff-port-674584: exit status 2 (280.349269ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-674584 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-674584 -n default-k8s-diff-port-674584
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-674584 -n default-k8s-diff-port-674584
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.93s)

TestStartStop/group/newest-cni/serial/FirstStart (60.22s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-040038 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
E0319 00:03:17.658438   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
E0319 00:03:22.700725   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/calico-800784/client.crt: no such file or directory
E0319 00:03:34.004640   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-040038 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (1m0.216816676s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.22s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (19.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9nw6m" [22ab952f-eb70-4af6-b716-898634af34bd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0319 00:03:42.086604   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/custom-flannel-800784/client.crt: no such file or directory
E0319 00:03:45.342769   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/bridge-800784/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9nw6m" [22ab952f-eb70-4af6-b716-898634af34bd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.004760054s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (19.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9nw6m" [22ab952f-eb70-4af6-b716-898634af34bd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006721229s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-500975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-500975 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.95s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-500975 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-500975 -n embed-certs-500975
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-500975 -n embed-certs-500975: exit status 2 (255.310342ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-500975 -n embed-certs-500975
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-500975 -n embed-certs-500975: exit status 2 (252.224621ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-500975 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-500975 -n embed-certs-500975
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-500975 -n embed-certs-500975
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.95s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-040038 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-040038 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.043648338s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/newest-cni/serial/Stop (2.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-040038 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-040038 --alsologtostderr -v=3: (2.438356383s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.44s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-040038 -n newest-cni-040038
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-040038 -n newest-cni-040038: exit status 7 (74.224089ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-040038 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (40.04s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-040038 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
E0319 00:04:38.699456   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/flannel-800784/client.crt: no such file or directory
E0319 00:04:42.714968   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/enable-default-cni-800784/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-040038 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (39.784059078s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-040038 -n newest-cni-040038
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.04s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-656kt" [01a527e1-05ee-4683-89e7-5b7d27f2e5e2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004400648s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-656kt" [01a527e1-05ee-4683-89e7-5b7d27f2e5e2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004724852s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-305688 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-305688 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.89s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-305688 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-305688 -n no-preload-305688
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-305688 -n no-preload-305688: exit status 2 (250.404769ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-305688 -n no-preload-305688
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-305688 -n no-preload-305688: exit status 2 (249.410951ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-305688 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-305688 -n no-preload-305688
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-305688 -n no-preload-305688
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.89s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-040038 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.45s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-040038 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-040038 -n newest-cni-040038
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-040038 -n newest-cni-040038: exit status 2 (243.829424ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-040038 -n newest-cni-040038
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-040038 -n newest-cni-040038: exit status 2 (240.868645ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-040038 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-040038 -n newest-cni-040038
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-040038 -n newest-cni-040038
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.45s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7mtpc" [44e804e7-1c54-4901-91f6-f6545dc1ec84] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003414446s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7mtpc" [44e804e7-1c54-4901-91f6-f6545dc1ec84] Running
E0319 00:08:34.004932   13738 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17786-6465/.minikube/profiles/addons-935788/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00415086s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-095323 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-095323 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-095323 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-095323 -n old-k8s-version-095323
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-095323 -n old-k8s-version-095323: exit status 2 (237.289041ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-095323 -n old-k8s-version-095323
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-095323 -n old-k8s-version-095323: exit status 2 (236.497879ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-095323 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-095323 -n old-k8s-version-095323
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-095323 -n old-k8s-version-095323
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.46s)

Test skip (39/333)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.29.3/cached-images 0
15 TestDownloadOnly/v1.29.3/binaries 0
16 TestDownloadOnly/v1.29.3/kubectl 0
23 TestDownloadOnly/v1.30.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.30.0-beta.0/binaries 0
25 TestDownloadOnly/v1.30.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
151 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
271 TestNetworkPlugins/group/kubenet 3.33
279 TestNetworkPlugins/group/cilium 3.31
289 TestStartStop/group/disable-driver-mounts 0.15

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.33s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-800784 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-800784

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-800784

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-800784

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-800784

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-800784

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-800784

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-800784

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-800784

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-800784

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-800784

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: /etc/hosts:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: /etc/resolv.conf:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-800784

>>> host: crictl pods:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: crictl containers:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> k8s: describe netcat deployment:
error: context "kubenet-800784" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-800784" does not exist

>>> k8s: netcat logs:
error: context "kubenet-800784" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-800784" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-800784" does not exist

>>> k8s: coredns logs:
error: context "kubenet-800784" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-800784" does not exist

>>> k8s: api server logs:
error: context "kubenet-800784" does not exist

>>> host: /etc/cni:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: ip a s:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: ip r s:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: iptables-save:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: iptables table nat:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-800784" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-800784" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-800784" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: kubelet daemon config:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> k8s: kubelet logs:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-800784

>>> host: docker daemon status:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: docker daemon config:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: docker system info:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: cri-docker daemon status:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: cri-docker daemon config:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: cri-dockerd version:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: containerd daemon status:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: containerd daemon config:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: containerd config dump:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: crio daemon status:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: crio daemon config:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: /etc/crio:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

>>> host: crio config:
* Profile "kubenet-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800784"

----------------------- debugLogs end: kubenet-800784 [took: 3.181428507s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-800784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-800784
--- SKIP: TestNetworkPlugins/group/kubenet (3.33s)

TestNetworkPlugins/group/cilium (3.31s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-800784 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-800784

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-800784

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-800784

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-800784

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-800784

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-800784

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-800784

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-800784

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-800784

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-800784

>>> host: /etc/nsswitch.conf:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: /etc/hosts:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: /etc/resolv.conf:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-800784

>>> host: crictl pods:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: crictl containers:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> k8s: describe netcat deployment:
error: context "cilium-800784" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-800784" does not exist

>>> k8s: netcat logs:
error: context "cilium-800784" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-800784" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-800784" does not exist

>>> k8s: coredns logs:
error: context "cilium-800784" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-800784" does not exist

>>> k8s: api server logs:
error: context "cilium-800784" does not exist

>>> host: /etc/cni:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: ip a s:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: ip r s:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: iptables-save:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: iptables table nat:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-800784

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-800784

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-800784" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-800784" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-800784

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-800784

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-800784" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-800784" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-800784" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-800784" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-800784" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: kubelet daemon config:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> k8s: kubelet logs:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-800784

>>> host: docker daemon status:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: docker daemon config:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: docker system info:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: cri-docker daemon status:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: cri-docker daemon config:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: cri-dockerd version:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: containerd daemon status:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: containerd daemon config:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: containerd config dump:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: crio daemon status:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: crio daemon config:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: /etc/crio:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

>>> host: crio config:
* Profile "cilium-800784" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800784"

----------------------- debugLogs end: cilium-800784 [took: 3.158864539s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-800784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-800784
--- SKIP: TestNetworkPlugins/group/cilium (3.31s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-616257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-616257
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
