Test Report: KVM_Linux_containerd 17174

7689d73509a567ada6f3653fa0ef2156acc9a338:2023-09-07:30902

Test failures (1/302)

Order  Failed test                   Duration
24     TestAddons/parallel/Registry  23.38s
TestAddons/parallel/Registry (23.38s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 26.873813ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-crq7x" [1215db8a-b169-4deb-a49a-11998b9284ea] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01718038s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pv7gw" [b92465d6-7cfe-40b4-a367-789f7718636f] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.020045633s
addons_test.go:316: (dbg) Run:  kubectl --context addons-594533 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-594533 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-594533 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (10.023040632s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-594533 ip
2023/09/06 23:41:28 [DEBUG] GET http://192.168.39.126:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-594533 addons disable registry --alsologtostderr -v=1
addons_test.go:364: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594533 addons disable registry --alsologtostderr -v=1: exit status 11 (320.115591ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0906 23:41:28.462446   15219 out.go:296] Setting OutFile to fd 1 ...
	I0906 23:41:28.462585   15219 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:41:28.462595   15219 out.go:309] Setting ErrFile to fd 2...
	I0906 23:41:28.462602   15219 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:41:28.462807   15219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6521/.minikube/bin
	I0906 23:41:28.463064   15219 addons.go:594] checking whether the cluster is paused
	I0906 23:41:28.463391   15219 config.go:182] Loaded profile config "addons-594533": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0906 23:41:28.463417   15219 host.go:66] Checking if "addons-594533" exists ...
	I0906 23:41:28.463760   15219 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:41:28.463801   15219 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:41:28.477477   15219 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45037
	I0906 23:41:28.477906   15219 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:41:28.478574   15219 main.go:141] libmachine: Using API Version  1
	I0906 23:41:28.478591   15219 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:41:28.478915   15219 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:41:28.479131   15219 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:41:28.480604   15219 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:41:28.480814   15219 ssh_runner.go:195] Run: systemctl --version
	I0906 23:41:28.480839   15219 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:41:28.483036   15219 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:41:28.483437   15219 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:41:28.483467   15219 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:41:28.483577   15219 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:41:28.483721   15219 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:41:28.483867   15219 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:41:28.483994   15219 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:41:28.564028   15219 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0906 23:41:28.564112   15219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 23:41:28.619544   15219 cri.go:89] found id: "ad7e7c33e0f2987a7dfd1837e6a0f88cd0799ab03eee56f53fb7947b09d2b165"
	I0906 23:41:28.619567   15219 cri.go:89] found id: "7c32cea2841b0bc3f6c5d435fdb9acaae42377f926a1fe7ba3124a707486bc36"
	I0906 23:41:28.619574   15219 cri.go:89] found id: "3c08fc9aa94a598ae5f740e88fa139dbbb4adae2fdc30ecf9a8b763771385422"
	I0906 23:41:28.619580   15219 cri.go:89] found id: "5ee40711bac35fdc1d608d463ed5819f0c7f0703fd09c705786e9873120f5645"
	I0906 23:41:28.619585   15219 cri.go:89] found id: "6ba0213a3e90057c39b9efe097fb4e10fd58c19192e27441e1a301c4135c7368"
	I0906 23:41:28.619590   15219 cri.go:89] found id: "efeee0e32800c33ec7b54cb238503bce7c7a80f573dede951ecef9428ae52ee7"
	I0906 23:41:28.619596   15219 cri.go:89] found id: "0058c3a7e868b0a8fe5c3f7407849186ebe1fd00b6937fe1ecaaaa787f4d7533"
	I0906 23:41:28.619606   15219 cri.go:89] found id: "a5ae000e6a920884d01019c2becf51a951de950d9a98dfe3e627b28983d8a1d7"
	I0906 23:41:28.619613   15219 cri.go:89] found id: "4da248514bdb91da39f294f54e210f788909c7a10b38064d47bf30a1a8292c51"
	I0906 23:41:28.619629   15219 cri.go:89] found id: "93d3714dbaf8bf66e2b0ea41ce2263cd04f4e4eeb7914cb1e84b01e4231e683a"
	I0906 23:41:28.619634   15219 cri.go:89] found id: "c647c734b5d07b5ef245fdf3ce957ca89bf1318303eb0956d8681770f706732b"
	I0906 23:41:28.619639   15219 cri.go:89] found id: "2142372f1f5ba5e3fe2d0620e8de900dc59aaef3712a4d106b0b20c5800082c8"
	I0906 23:41:28.619648   15219 cri.go:89] found id: "a622452e5ae275ee99208ba9ee123fbe97ace4774a83fc576a6c3dc062a7853e"
	I0906 23:41:28.619653   15219 cri.go:89] found id: "1f7d7e2decc17b0dee41e3bbffc34ecbd59dd9c4d51efa12a244a99af997d80b"
	I0906 23:41:28.619666   15219 cri.go:89] found id: "706f82c1f995b31c35fa0b7541744315b7a27c49e9bcd60c4ef51d8fd1e2331b"
	I0906 23:41:28.619675   15219 cri.go:89] found id: "f71af02b65734462b3c43e191f49a95ebc62b196bc0c6d1d9362c6bf05d2cf70"
	I0906 23:41:28.619681   15219 cri.go:89] found id: "a61ae532d8087936bea7e43e0978fb3077452e7e7e82a534cd767307dd80a038"
	I0906 23:41:28.619693   15219 cri.go:89] found id: "12e619be78929e24934a00061f7043846fa2f3a1f793649e062d8ce4e8638c29"
	I0906 23:41:28.619702   15219 cri.go:89] found id: "058cf62f54b6140088f509ec75d7699f2471b81916f76baa6749713947e2d378"
	I0906 23:41:28.619707   15219 cri.go:89] found id: "6c151abce636c9b3d55ded8300646a7e9f75c4899ca7b6834194bf68c90e1f04"
	I0906 23:41:28.619716   15219 cri.go:89] found id: "11d32dc02c2c323f85c2f8b3523227c87a0a2d7fc45b83ef57d59c98655430ff"
	I0906 23:41:28.619721   15219 cri.go:89] found id: ""
	I0906 23:41:28.619766   15219 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0906 23:41:28.735234   15219 main.go:141] libmachine: Making call to close driver server
	I0906 23:41:28.735258   15219 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:41:28.735573   15219 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:41:28.735595   15219 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:41:28.738434   15219 out.go:177] 
	W0906 23:41:28.739802   15219 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-09-06T23:41:28Z" level=error msg="stat /run/containerd/runc/k8s.io/b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-09-06T23:41:28Z" level=error msg="stat /run/containerd/runc/k8s.io/b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207: no such file or directory"
	
	W0906 23:41:28.739824   15219 out.go:239] * 
	* 
	W0906 23:41:28.742818   15219 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 23:41:28.744534   15219 out.go:177] 

** /stderr **
addons_test.go:366: failed to disable registry addon. args "out/minikube-linux-amd64 -p addons-594533 addons disable registry --alsologtostderr -v=1": exit status 11
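The failure above looks like a race in the paused-state check: `crictl ps` returned a list of container IDs, but by the time the follow-up `sudo runc --root /run/containerd/runc/k8s.io list -f json` ran, one container's state directory had been removed, so runc failed with "no such file or directory". A minimal sketch of that race, simulated with a temporary directory (the `container-a`/`container-b` IDs and the `$root` path here are illustrative stand-ins, not values from the log):

```shell
# Simulate the window minikube hit: a container's runc state directory
# can vanish between listing containers and stat-ing each one.
root=$(mktemp -d)          # stands in for /run/containerd/runc/k8s.io
mkdir "$root/container-a"  # only this container still exists

for id in container-a container-b; do
  # A tolerant check skips directories that disappeared mid-scan
  # instead of failing the whole listing.
  if [ -d "$root/$id" ]; then
    echo "found $id"
  else
    echo "missing $id (raced with container exit)"
  fi
done

rm -r "$root"
```

The sketch only illustrates why checking for the state directory (or tolerating ENOENT) before stat-ing each ID would make the listing robust against containers exiting mid-scan; it is not minikube's actual fix.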
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-594533 -n addons-594533
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-594533 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-594533 logs -n 25: (1.919993567s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-783127 | jenkins | v1.31.2 | 06 Sep 23 23:37 UTC |                     |
	|         | -p download-only-783127        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-783127 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC |                     |
	|         | -p download-only-783127        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC | 06 Sep 23 23:38 UTC |
	| delete  | -p download-only-783127        | download-only-783127 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC | 06 Sep 23 23:38 UTC |
	| delete  | -p download-only-783127        | download-only-783127 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC | 06 Sep 23 23:38 UTC |
	| start   | --download-only -p             | binary-mirror-706469 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC |                     |
	|         | binary-mirror-706469           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38425         |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-706469        | binary-mirror-706469 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC | 06 Sep 23 23:38 UTC |
	| start   | -p addons-594533               | addons-594533        | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC | 06 Sep 23 23:41 UTC |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	|         | --addons=helm-tiller           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-594533        | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
	|         | addons-594533                  |                      |         |         |                     |                     |
	| addons  | addons-594533 addons           | addons-594533        | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-594533        | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
	|         | addons-594533                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-594533        | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
	|         | -p addons-594533               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| ip      | addons-594533 ip               | addons-594533        | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
	| addons  | addons-594533 addons disable   | addons-594533        | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC |                     |
	|         | registry --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 23:38:41
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 23:38:41.953882   14148 out.go:296] Setting OutFile to fd 1 ...
	I0906 23:38:41.953976   14148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:38:41.953984   14148 out.go:309] Setting ErrFile to fd 2...
	I0906 23:38:41.953988   14148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:38:41.954181   14148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6521/.minikube/bin
	I0906 23:38:41.954692   14148 out.go:303] Setting JSON to false
	I0906 23:38:41.955426   14148 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1268,"bootTime":1694042254,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:38:41.955474   14148 start.go:138] virtualization: kvm guest
	I0906 23:38:41.957374   14148 out.go:177] * [addons-594533] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0906 23:38:41.958698   14148 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 23:38:41.959911   14148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:38:41.958723   14148 notify.go:220] Checking for updates...
	I0906 23:38:41.962351   14148 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6521/kubeconfig
	I0906 23:38:41.963631   14148 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6521/.minikube
	I0906 23:38:41.965346   14148 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 23:38:41.966708   14148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 23:38:41.968176   14148 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 23:38:41.997609   14148 out.go:177] * Using the kvm2 driver based on user configuration
	I0906 23:38:41.998876   14148 start.go:298] selected driver: kvm2
	I0906 23:38:41.998887   14148 start.go:902] validating driver "kvm2" against <nil>
	I0906 23:38:41.998896   14148 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 23:38:41.999493   14148 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:38:41.999554   14148 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6521/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 23:38:42.012165   14148 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0906 23:38:42.012199   14148 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 23:38:42.012379   14148 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 23:38:42.012408   14148 cni.go:84] Creating CNI manager for ""
	I0906 23:38:42.012416   14148 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0906 23:38:42.012427   14148 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 23:38:42.012437   14148 start_flags.go:321] config:
	{Name:addons-594533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-594533 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:38:42.012538   14148 iso.go:125] acquiring lock: {Name:mk888fe4d8846e15e5fb0d4239da695971e7f3d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:38:42.014118   14148 out.go:177] * Starting control plane node addons-594533 in cluster addons-594533
	I0906 23:38:42.015324   14148 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0906 23:38:42.015355   14148 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17174-6521/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4
	I0906 23:38:42.015368   14148 cache.go:57] Caching tarball of preloaded images
	I0906 23:38:42.015440   14148 preload.go:174] Found /home/jenkins/minikube-integration/17174-6521/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 23:38:42.015453   14148 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on containerd
	I0906 23:38:42.015700   14148 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/config.json ...
	I0906 23:38:42.015719   14148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/config.json: {Name:mk71766de68270408c8e3a996929e3f2460edff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:38:42.015851   14148 start.go:365] acquiring machines lock for addons-594533: {Name:mk73d57975a1c0fad3e2247053eb144a6fff9966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 23:38:42.015911   14148 start.go:369] acquired machines lock for "addons-594533" in 44.28µs
	I0906 23:38:42.015935   14148 start.go:93] Provisioning new machine with config: &{Name:addons-594533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:addons-594533 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0906 23:38:42.015999   14148 start.go:125] createHost starting for "" (driver="kvm2")
	I0906 23:38:42.017570   14148 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0906 23:38:42.017675   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:38:42.017716   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:38:42.029811   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40329
	I0906 23:38:42.030199   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:38:42.030730   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:38:42.030744   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:38:42.031118   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:38:42.031289   14148 main.go:141] libmachine: (addons-594533) Calling .GetMachineName
	I0906 23:38:42.031448   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:38:42.031606   14148 start.go:159] libmachine.API.Create for "addons-594533" (driver="kvm2")
	I0906 23:38:42.031675   14148 client.go:168] LocalClient.Create starting
	I0906 23:38:42.031714   14148 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17174-6521/.minikube/certs/ca.pem
	I0906 23:38:42.115743   14148 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17174-6521/.minikube/certs/cert.pem
	I0906 23:38:42.165305   14148 main.go:141] libmachine: Running pre-create checks...
	I0906 23:38:42.165327   14148 main.go:141] libmachine: (addons-594533) Calling .PreCreateCheck
	I0906 23:38:42.165849   14148 main.go:141] libmachine: (addons-594533) Calling .GetConfigRaw
	I0906 23:38:42.166284   14148 main.go:141] libmachine: Creating machine...
	I0906 23:38:42.166300   14148 main.go:141] libmachine: (addons-594533) Calling .Create
	I0906 23:38:42.166440   14148 main.go:141] libmachine: (addons-594533) Creating KVM machine...
	I0906 23:38:42.167572   14148 main.go:141] libmachine: (addons-594533) DBG | found existing default KVM network
	I0906 23:38:42.168254   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:42.168115   14170 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000298c0}
	I0906 23:38:42.173272   14148 main.go:141] libmachine: (addons-594533) DBG | trying to create private KVM network mk-addons-594533 192.168.39.0/24...
	I0906 23:38:42.237341   14148 main.go:141] libmachine: (addons-594533) DBG | private KVM network mk-addons-594533 192.168.39.0/24 created
	I0906 23:38:42.237369   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:42.237282   14170 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17174-6521/.minikube
	I0906 23:38:42.237383   14148 main.go:141] libmachine: (addons-594533) Setting up store path in /home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533 ...
	I0906 23:38:42.237401   14148 main.go:141] libmachine: (addons-594533) Building disk image from file:///home/jenkins/minikube-integration/17174-6521/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0906 23:38:42.237418   14148 main.go:141] libmachine: (addons-594533) Downloading /home/jenkins/minikube-integration/17174-6521/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17174-6521/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0906 23:38:42.468748   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:42.468618   14170 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa...
	I0906 23:38:42.644018   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:42.643887   14170 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/addons-594533.rawdisk...
	I0906 23:38:42.644057   14148 main.go:141] libmachine: (addons-594533) DBG | Writing magic tar header
	I0906 23:38:42.644074   14148 main.go:141] libmachine: (addons-594533) DBG | Writing SSH key tar header
	I0906 23:38:42.644093   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:42.644009   14170 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533 ...
	I0906 23:38:42.644112   14148 main.go:141] libmachine: (addons-594533) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533
	I0906 23:38:42.644176   14148 main.go:141] libmachine: (addons-594533) Setting executable bit set on /home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533 (perms=drwx------)
	I0906 23:38:42.644209   14148 main.go:141] libmachine: (addons-594533) Setting executable bit set on /home/jenkins/minikube-integration/17174-6521/.minikube/machines (perms=drwxr-xr-x)
	I0906 23:38:42.644225   14148 main.go:141] libmachine: (addons-594533) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6521/.minikube/machines
	I0906 23:38:42.644243   14148 main.go:141] libmachine: (addons-594533) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6521/.minikube
	I0906 23:38:42.644257   14148 main.go:141] libmachine: (addons-594533) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17174-6521
	I0906 23:38:42.644276   14148 main.go:141] libmachine: (addons-594533) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 23:38:42.644289   14148 main.go:141] libmachine: (addons-594533) DBG | Checking permissions on dir: /home/jenkins
	I0906 23:38:42.644306   14148 main.go:141] libmachine: (addons-594533) Setting executable bit set on /home/jenkins/minikube-integration/17174-6521/.minikube (perms=drwxr-xr-x)
	I0906 23:38:42.644325   14148 main.go:141] libmachine: (addons-594533) Setting executable bit set on /home/jenkins/minikube-integration/17174-6521 (perms=drwxrwxr-x)
	I0906 23:38:42.644338   14148 main.go:141] libmachine: (addons-594533) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 23:38:42.644351   14148 main.go:141] libmachine: (addons-594533) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 23:38:42.644359   14148 main.go:141] libmachine: (addons-594533) Creating domain...
	I0906 23:38:42.644366   14148 main.go:141] libmachine: (addons-594533) DBG | Checking permissions on dir: /home
	I0906 23:38:42.644378   14148 main.go:141] libmachine: (addons-594533) DBG | Skipping /home - not owner
	I0906 23:38:42.645312   14148 main.go:141] libmachine: (addons-594533) define libvirt domain using xml: 
	I0906 23:38:42.645337   14148 main.go:141] libmachine: (addons-594533) <domain type='kvm'>
	I0906 23:38:42.645347   14148 main.go:141] libmachine: (addons-594533)   <name>addons-594533</name>
	I0906 23:38:42.645366   14148 main.go:141] libmachine: (addons-594533)   <memory unit='MiB'>4000</memory>
	I0906 23:38:42.645399   14148 main.go:141] libmachine: (addons-594533)   <vcpu>2</vcpu>
	I0906 23:38:42.645416   14148 main.go:141] libmachine: (addons-594533)   <features>
	I0906 23:38:42.645424   14148 main.go:141] libmachine: (addons-594533)     <acpi/>
	I0906 23:38:42.645430   14148 main.go:141] libmachine: (addons-594533)     <apic/>
	I0906 23:38:42.645441   14148 main.go:141] libmachine: (addons-594533)     <pae/>
	I0906 23:38:42.645450   14148 main.go:141] libmachine: (addons-594533)     
	I0906 23:38:42.645462   14148 main.go:141] libmachine: (addons-594533)   </features>
	I0906 23:38:42.645474   14148 main.go:141] libmachine: (addons-594533)   <cpu mode='host-passthrough'>
	I0906 23:38:42.645497   14148 main.go:141] libmachine: (addons-594533)   
	I0906 23:38:42.645511   14148 main.go:141] libmachine: (addons-594533)   </cpu>
	I0906 23:38:42.645530   14148 main.go:141] libmachine: (addons-594533)   <os>
	I0906 23:38:42.645549   14148 main.go:141] libmachine: (addons-594533)     <type>hvm</type>
	I0906 23:38:42.645571   14148 main.go:141] libmachine: (addons-594533)     <boot dev='cdrom'/>
	I0906 23:38:42.645582   14148 main.go:141] libmachine: (addons-594533)     <boot dev='hd'/>
	I0906 23:38:42.645591   14148 main.go:141] libmachine: (addons-594533)     <bootmenu enable='no'/>
	I0906 23:38:42.645608   14148 main.go:141] libmachine: (addons-594533)   </os>
	I0906 23:38:42.645623   14148 main.go:141] libmachine: (addons-594533)   <devices>
	I0906 23:38:42.645646   14148 main.go:141] libmachine: (addons-594533)     <disk type='file' device='cdrom'>
	I0906 23:38:42.645665   14148 main.go:141] libmachine: (addons-594533)       <source file='/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/boot2docker.iso'/>
	I0906 23:38:42.645679   14148 main.go:141] libmachine: (addons-594533)       <target dev='hdc' bus='scsi'/>
	I0906 23:38:42.645693   14148 main.go:141] libmachine: (addons-594533)       <readonly/>
	I0906 23:38:42.645706   14148 main.go:141] libmachine: (addons-594533)     </disk>
	I0906 23:38:42.645723   14148 main.go:141] libmachine: (addons-594533)     <disk type='file' device='disk'>
	I0906 23:38:42.645746   14148 main.go:141] libmachine: (addons-594533)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 23:38:42.645766   14148 main.go:141] libmachine: (addons-594533)       <source file='/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/addons-594533.rawdisk'/>
	I0906 23:38:42.645779   14148 main.go:141] libmachine: (addons-594533)       <target dev='hda' bus='virtio'/>
	I0906 23:38:42.645793   14148 main.go:141] libmachine: (addons-594533)     </disk>
	I0906 23:38:42.645805   14148 main.go:141] libmachine: (addons-594533)     <interface type='network'>
	I0906 23:38:42.645818   14148 main.go:141] libmachine: (addons-594533)       <source network='mk-addons-594533'/>
	I0906 23:38:42.645826   14148 main.go:141] libmachine: (addons-594533)       <model type='virtio'/>
	I0906 23:38:42.645836   14148 main.go:141] libmachine: (addons-594533)     </interface>
	I0906 23:38:42.645851   14148 main.go:141] libmachine: (addons-594533)     <interface type='network'>
	I0906 23:38:42.645865   14148 main.go:141] libmachine: (addons-594533)       <source network='default'/>
	I0906 23:38:42.645877   14148 main.go:141] libmachine: (addons-594533)       <model type='virtio'/>
	I0906 23:38:42.645890   14148 main.go:141] libmachine: (addons-594533)     </interface>
	I0906 23:38:42.645902   14148 main.go:141] libmachine: (addons-594533)     <serial type='pty'>
	I0906 23:38:42.645914   14148 main.go:141] libmachine: (addons-594533)       <target port='0'/>
	I0906 23:38:42.645925   14148 main.go:141] libmachine: (addons-594533)     </serial>
	I0906 23:38:42.645942   14148 main.go:141] libmachine: (addons-594533)     <console type='pty'>
	I0906 23:38:42.645962   14148 main.go:141] libmachine: (addons-594533)       <target type='serial' port='0'/>
	I0906 23:38:42.645976   14148 main.go:141] libmachine: (addons-594533)     </console>
	I0906 23:38:42.645989   14148 main.go:141] libmachine: (addons-594533)     <rng model='virtio'>
	I0906 23:38:42.646005   14148 main.go:141] libmachine: (addons-594533)       <backend model='random'>/dev/random</backend>
	I0906 23:38:42.646041   14148 main.go:141] libmachine: (addons-594533)     </rng>
	I0906 23:38:42.646056   14148 main.go:141] libmachine: (addons-594533)     
	I0906 23:38:42.646069   14148 main.go:141] libmachine: (addons-594533)     
	I0906 23:38:42.646088   14148 main.go:141] libmachine: (addons-594533)   </devices>
	I0906 23:38:42.646100   14148 main.go:141] libmachine: (addons-594533) </domain>
	I0906 23:38:42.646116   14148 main.go:141] libmachine: (addons-594533) 
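The lines above show the libvirt domain XML that the kvm2 driver assembles (two disks, two virtio NICs, serial console, virtio RNG). As an illustrative sketch only, the key structure can be checked with Python's standard `xml.etree` parser; the XML below is a trimmed reconstruction of the logged definition (source file paths omitted), not minikube's actual template:

```python
import xml.etree.ElementTree as ET

# Trimmed reconstruction of the domain XML logged above; illustrative only.
DOMAIN_XML = """
<domain type='kvm'>
  <name>addons-594533</name>
  <memory unit='MiB'>4000</memory>
  <vcpu>2</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <target dev='hdc' bus='scsi'/>
    </disk>
    <disk type='file' device='disk'>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='mk-addons-594533'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
"""

def domain_networks(xml_text):
    """Return the libvirt network names a domain definition attaches to."""
    root = ET.fromstring(xml_text)
    return [iface.find('source').get('network')
            for iface in root.findall('./devices/interface')]
```

This explains the two MAC addresses reported just below: one lease per attached network (`mk-addons-594533` for cluster traffic, `default` for outbound access).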
	I0906 23:38:42.651454   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:26:d9:ca in network default
	I0906 23:38:42.651975   14148 main.go:141] libmachine: (addons-594533) Ensuring networks are active...
	I0906 23:38:42.652006   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:38:42.652525   14148 main.go:141] libmachine: (addons-594533) Ensuring network default is active
	I0906 23:38:42.652812   14148 main.go:141] libmachine: (addons-594533) Ensuring network mk-addons-594533 is active
	I0906 23:38:42.653293   14148 main.go:141] libmachine: (addons-594533) Getting domain xml...
	I0906 23:38:42.653921   14148 main.go:141] libmachine: (addons-594533) Creating domain...
	I0906 23:38:44.070915   14148 main.go:141] libmachine: (addons-594533) Waiting to get IP...
	I0906 23:38:44.071578   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:38:44.071903   14148 main.go:141] libmachine: (addons-594533) DBG | unable to find current IP address of domain addons-594533 in network mk-addons-594533
	I0906 23:38:44.071940   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:44.071899   14170 retry.go:31] will retry after 311.116068ms: waiting for machine to come up
	I0906 23:38:44.384222   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:38:44.384616   14148 main.go:141] libmachine: (addons-594533) DBG | unable to find current IP address of domain addons-594533 in network mk-addons-594533
	I0906 23:38:44.384652   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:44.384578   14170 retry.go:31] will retry after 388.257596ms: waiting for machine to come up
	I0906 23:38:44.774044   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:38:44.774510   14148 main.go:141] libmachine: (addons-594533) DBG | unable to find current IP address of domain addons-594533 in network mk-addons-594533
	I0906 23:38:44.774539   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:44.774455   14170 retry.go:31] will retry after 333.887066ms: waiting for machine to come up
	I0906 23:38:45.110041   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:38:45.110501   14148 main.go:141] libmachine: (addons-594533) DBG | unable to find current IP address of domain addons-594533 in network mk-addons-594533
	I0906 23:38:45.110528   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:45.110455   14170 retry.go:31] will retry after 471.895899ms: waiting for machine to come up
	I0906 23:38:45.584206   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:38:45.584656   14148 main.go:141] libmachine: (addons-594533) DBG | unable to find current IP address of domain addons-594533 in network mk-addons-594533
	I0906 23:38:45.584691   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:45.584618   14170 retry.go:31] will retry after 657.883885ms: waiting for machine to come up
	I0906 23:38:46.244760   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:38:46.245293   14148 main.go:141] libmachine: (addons-594533) DBG | unable to find current IP address of domain addons-594533 in network mk-addons-594533
	I0906 23:38:46.245321   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:46.245249   14170 retry.go:31] will retry after 831.778981ms: waiting for machine to come up
	I0906 23:38:47.078688   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:38:47.079185   14148 main.go:141] libmachine: (addons-594533) DBG | unable to find current IP address of domain addons-594533 in network mk-addons-594533
	I0906 23:38:47.079214   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:47.079136   14170 retry.go:31] will retry after 935.891329ms: waiting for machine to come up
	I0906 23:38:48.016147   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:38:48.016519   14148 main.go:141] libmachine: (addons-594533) DBG | unable to find current IP address of domain addons-594533 in network mk-addons-594533
	I0906 23:38:48.016546   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:48.016466   14170 retry.go:31] will retry after 1.020055369s: waiting for machine to come up
	I0906 23:38:49.038644   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:38:49.039008   14148 main.go:141] libmachine: (addons-594533) DBG | unable to find current IP address of domain addons-594533 in network mk-addons-594533
	I0906 23:38:49.039035   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:49.038975   14170 retry.go:31] will retry after 1.828413656s: waiting for machine to come up
	I0906 23:38:50.869989   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:38:50.870411   14148 main.go:141] libmachine: (addons-594533) DBG | unable to find current IP address of domain addons-594533 in network mk-addons-594533
	I0906 23:38:50.870436   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:50.870364   14170 retry.go:31] will retry after 2.252836628s: waiting for machine to come up
	I0906 23:38:53.124796   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:38:53.125237   14148 main.go:141] libmachine: (addons-594533) DBG | unable to find current IP address of domain addons-594533 in network mk-addons-594533
	I0906 23:38:53.125267   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:53.125185   14170 retry.go:31] will retry after 1.817256855s: waiting for machine to come up
	I0906 23:38:54.943533   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:38:54.943975   14148 main.go:141] libmachine: (addons-594533) DBG | unable to find current IP address of domain addons-594533 in network mk-addons-594533
	I0906 23:38:54.944000   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:54.943928   14170 retry.go:31] will retry after 3.109303415s: waiting for machine to come up
	I0906 23:38:58.054562   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:38:58.054915   14148 main.go:141] libmachine: (addons-594533) DBG | unable to find current IP address of domain addons-594533 in network mk-addons-594533
	I0906 23:38:58.054942   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:38:58.054865   14170 retry.go:31] will retry after 3.196053468s: waiting for machine to come up
	I0906 23:39:01.255104   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:01.255453   14148 main.go:141] libmachine: (addons-594533) DBG | unable to find current IP address of domain addons-594533 in network mk-addons-594533
	I0906 23:39:01.255485   14148 main.go:141] libmachine: (addons-594533) DBG | I0906 23:39:01.255403   14170 retry.go:31] will retry after 5.618278078s: waiting for machine to come up
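The `retry.go:31` lines above poll the DHCP leases for the new domain with growing, jittered delays (311ms, 388ms, ... up to 5.6s) until an IP appears. A minimal sketch of that pattern, with made-up backoff parameters rather than minikube's actual values:

```python
import random
import time

def wait_for_ip(get_ip, max_wait=60.0, initial=0.3, factor=1.5):
    """Poll get_ip() with jittered, roughly exponential backoff until it
    returns an address or max_wait elapses. Backoff constants here are
    illustrative, not the ones minikube uses."""
    deadline = time.monotonic() + max_wait
    delay = initial
    while time.monotonic() < deadline:
        ip = get_ip()
        if ip:
            return ip
        # Jitter the delay so concurrent waiters don't poll in lockstep.
        time.sleep(delay * random.uniform(0.8, 1.2))
        delay *= factor
    raise TimeoutError("machine never obtained an IP address")
```

In the log, the lease shows up after roughly 24 seconds of polling, at which point the driver reserves the address as a static IP.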
	I0906 23:39:06.878696   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:06.879062   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has current primary IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:06.879100   14148 main.go:141] libmachine: (addons-594533) Found IP for machine: 192.168.39.126
	I0906 23:39:06.879114   14148 main.go:141] libmachine: (addons-594533) Reserving static IP address...
	I0906 23:39:06.879396   14148 main.go:141] libmachine: (addons-594533) DBG | unable to find host DHCP lease matching {name: "addons-594533", mac: "52:54:00:de:6b:36", ip: "192.168.39.126"} in network mk-addons-594533
	I0906 23:39:06.946694   14148 main.go:141] libmachine: (addons-594533) DBG | Getting to WaitForSSH function...
	I0906 23:39:06.946725   14148 main.go:141] libmachine: (addons-594533) Reserved static IP address: 192.168.39.126
	I0906 23:39:06.946740   14148 main.go:141] libmachine: (addons-594533) Waiting for SSH to be available...
	I0906 23:39:06.948910   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:06.949222   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:minikube Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:06.949253   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:06.949410   14148 main.go:141] libmachine: (addons-594533) DBG | Using SSH client type: external
	I0906 23:39:06.949431   14148 main.go:141] libmachine: (addons-594533) DBG | Using SSH private key: /home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa (-rw-------)
	I0906 23:39:06.949451   14148 main.go:141] libmachine: (addons-594533) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 23:39:06.949468   14148 main.go:141] libmachine: (addons-594533) DBG | About to run SSH command:
	I0906 23:39:06.949476   14148 main.go:141] libmachine: (addons-594533) DBG | exit 0
	I0906 23:39:07.041575   14148 main.go:141] libmachine: (addons-594533) DBG | SSH cmd err, output: <nil>: 
	I0906 23:39:07.041833   14148 main.go:141] libmachine: (addons-594533) KVM machine creation complete!
	I0906 23:39:07.042129   14148 main.go:141] libmachine: (addons-594533) Calling .GetConfigRaw
	I0906 23:39:07.042626   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:07.042816   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:07.042964   14148 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 23:39:07.042984   14148 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:39:07.044074   14148 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 23:39:07.044088   14148 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 23:39:07.044094   14148 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 23:39:07.044101   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:07.046000   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:07.046344   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:07.046379   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:07.046519   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:07.046706   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:07.046848   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:07.047001   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:07.047168   14148 main.go:141] libmachine: Using SSH client type: native
	I0906 23:39:07.047565   14148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0906 23:39:07.047583   14148 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 23:39:07.152871   14148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 23:39:07.152897   14148 main.go:141] libmachine: Detecting the provisioner...
	I0906 23:39:07.152905   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:07.155208   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:07.155497   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:07.155539   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:07.155680   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:07.155871   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:07.156033   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:07.156169   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:07.156327   14148 main.go:141] libmachine: Using SSH client type: native
	I0906 23:39:07.156760   14148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0906 23:39:07.156773   14148 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 23:39:07.266901   14148 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0906 23:39:07.266993   14148 main.go:141] libmachine: found compatible host: buildroot
	I0906 23:39:07.267005   14148 main.go:141] libmachine: Provisioning with buildroot...
	I0906 23:39:07.267015   14148 main.go:141] libmachine: (addons-594533) Calling .GetMachineName
	I0906 23:39:07.267244   14148 buildroot.go:166] provisioning hostname "addons-594533"
	I0906 23:39:07.267267   14148 main.go:141] libmachine: (addons-594533) Calling .GetMachineName
	I0906 23:39:07.267445   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:07.269726   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:07.270065   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:07.270099   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:07.270237   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:07.270397   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:07.270545   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:07.270714   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:07.270888   14148 main.go:141] libmachine: Using SSH client type: native
	I0906 23:39:07.271267   14148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0906 23:39:07.271279   14148 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-594533 && echo "addons-594533" | sudo tee /etc/hostname
	I0906 23:39:07.389369   14148 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-594533
	
	I0906 23:39:07.389398   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:07.391947   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:07.392226   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:07.392257   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:07.392442   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:07.392627   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:07.392772   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:07.392912   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:07.393068   14148 main.go:141] libmachine: Using SSH client type: native
	I0906 23:39:07.393452   14148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0906 23:39:07.393470   14148 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-594533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-594533/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-594533' | sudo tee -a /etc/hosts; 
				fi
			fi
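The shell pipeline above makes `/etc/hosts` resolve the new hostname: if no line already names `addons-594533`, it rewrites an existing `127.0.1.1` entry, else appends one. The same edit, sketched as a pure function over the file contents (illustrative only; minikube runs the sed/tee pipeline over SSH):

```python
import re

def ensure_hostname(hosts_text, hostname):
    """Return hosts_text with a 127.0.1.1 entry for hostname, mirroring
    the grep/sed/tee logic in the logged shell snippet."""
    # Already present on some line? Leave the file untouched.
    if re.search(rf"^.*\s{re.escape(hostname)}$", hosts_text, re.M):
        return hosts_text
    # Rewrite an existing 127.0.1.1 entry if there is one...
    if re.search(r"^127\.0\.1\.1\s.*$", hosts_text, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$",
                      f"127.0.1.1 {hostname}", hosts_text, flags=re.M)
    # ...otherwise append a new one.
    return hosts_text + f"\n127.0.1.1 {hostname}\n"
```

The empty SSH output on the next log line is the expected result: the hostname was already set by the preceding `sudo hostname`/`tee /etc/hostname` command, so this step only touches `/etc/hosts`.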
	I0906 23:39:07.509804   14148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 23:39:07.509834   14148 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17174-6521/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-6521/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-6521/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-6521/.minikube}
	I0906 23:39:07.509897   14148 buildroot.go:174] setting up certificates
	I0906 23:39:07.509906   14148 provision.go:83] configureAuth start
	I0906 23:39:07.509919   14148 main.go:141] libmachine: (addons-594533) Calling .GetMachineName
	I0906 23:39:07.510172   14148 main.go:141] libmachine: (addons-594533) Calling .GetIP
	I0906 23:39:07.512509   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:07.512794   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:07.512816   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:07.512929   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:07.514976   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:07.515319   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:07.515353   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:07.515492   14148 provision.go:138] copyHostCerts
	I0906 23:39:07.515560   14148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6521/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-6521/.minikube/ca.pem (1082 bytes)
	I0906 23:39:07.515667   14148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6521/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-6521/.minikube/cert.pem (1123 bytes)
	I0906 23:39:07.515730   14148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-6521/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-6521/.minikube/key.pem (1675 bytes)
	I0906 23:39:07.515772   14148 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-6521/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-6521/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-6521/.minikube/certs/ca-key.pem org=jenkins.addons-594533 san=[192.168.39.126 192.168.39.126 localhost 127.0.0.1 minikube addons-594533]
	I0906 23:39:07.962974   14148 provision.go:172] copyRemoteCerts
	I0906 23:39:07.963029   14148 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 23:39:07.963055   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:07.965600   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:07.965943   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:07.965979   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:07.966100   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:07.966309   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:07.966468   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:07.966584   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:08.051769   14148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6521/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 23:39:08.074202   14148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6521/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 23:39:08.095897   14148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6521/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0906 23:39:08.117299   14148 provision.go:86] duration metric: configureAuth took 607.381234ms
	I0906 23:39:08.117316   14148 buildroot.go:189] setting minikube options for container-runtime
	I0906 23:39:08.117468   14148 config.go:182] Loaded profile config "addons-594533": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0906 23:39:08.117486   14148 main.go:141] libmachine: Checking connection to Docker...
	I0906 23:39:08.117496   14148 main.go:141] libmachine: (addons-594533) Calling .GetURL
	I0906 23:39:08.118585   14148 main.go:141] libmachine: (addons-594533) DBG | Using libvirt version 6000000
	I0906 23:39:08.120430   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:08.120854   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:08.120884   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:08.121110   14148 main.go:141] libmachine: Docker is up and running!
	I0906 23:39:08.121129   14148 main.go:141] libmachine: Reticulating splines...
	I0906 23:39:08.121135   14148 client.go:171] LocalClient.Create took 26.089451386s
	I0906 23:39:08.121155   14148 start.go:167] duration metric: libmachine.API.Create for "addons-594533" took 26.089546972s
	I0906 23:39:08.121167   14148 start.go:300] post-start starting for "addons-594533" (driver="kvm2")
	I0906 23:39:08.121178   14148 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 23:39:08.121199   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:08.121407   14148 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 23:39:08.121443   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:08.123533   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:08.123836   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:08.123868   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:08.123990   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:08.124141   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:08.124271   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:08.124401   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:08.208274   14148 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 23:39:08.212959   14148 info.go:137] Remote host: Buildroot 2021.02.12
	I0906 23:39:08.212981   14148 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6521/.minikube/addons for local assets ...
	I0906 23:39:08.213052   14148 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-6521/.minikube/files for local assets ...
	I0906 23:39:08.213079   14148 start.go:303] post-start completed in 91.902512ms
	I0906 23:39:08.213110   14148 main.go:141] libmachine: (addons-594533) Calling .GetConfigRaw
	I0906 23:39:08.213712   14148 main.go:141] libmachine: (addons-594533) Calling .GetIP
	I0906 23:39:08.216396   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:08.216728   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:08.216763   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:08.217002   14148 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/config.json ...
	I0906 23:39:08.217178   14148 start.go:128] duration metric: createHost completed in 26.201171402s
	I0906 23:39:08.217223   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:08.219147   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:08.219466   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:08.219493   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:08.219595   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:08.219772   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:08.219896   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:08.220035   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:08.220167   14148 main.go:141] libmachine: Using SSH client type: native
	I0906 23:39:08.220567   14148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.126 22 <nil> <nil>}
	I0906 23:39:08.220579   14148 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0906 23:39:08.330643   14148 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694043548.314312364
	
	I0906 23:39:08.330676   14148 fix.go:206] guest clock: 1694043548.314312364
	I0906 23:39:08.330686   14148 fix.go:219] Guest: 2023-09-06 23:39:08.314312364 +0000 UTC Remote: 2023-09-06 23:39:08.2171892 +0000 UTC m=+26.293829787 (delta=97.123164ms)
	I0906 23:39:08.330725   14148 fix.go:190] guest clock delta is within tolerance: 97.123164ms
	I0906 23:39:08.330733   14148 start.go:83] releasing machines lock for "addons-594533", held for 26.314810032s
	I0906 23:39:08.330752   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:08.330994   14148 main.go:141] libmachine: (addons-594533) Calling .GetIP
	I0906 23:39:08.333550   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:08.333873   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:08.333903   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:08.334041   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:08.334626   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:08.334794   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:08.334871   14148 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 23:39:08.334913   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:08.335017   14148 ssh_runner.go:195] Run: cat /version.json
	I0906 23:39:08.335048   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:08.337232   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:08.337332   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:08.337480   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:08.337503   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:08.337649   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:08.337649   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:08.337682   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:08.337815   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:08.337832   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:08.338013   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:08.338036   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:08.338213   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:08.338215   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:08.338346   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:08.414505   14148 ssh_runner.go:195] Run: systemctl --version
	I0906 23:39:08.442079   14148 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 23:39:08.447356   14148 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 23:39:08.447403   14148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 23:39:08.462091   14148 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 23:39:08.462106   14148 start.go:466] detecting cgroup driver to use...
	I0906 23:39:08.462154   14148 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 23:39:08.498575   14148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 23:39:08.510344   14148 docker.go:196] disabling cri-docker service (if available) ...
	I0906 23:39:08.510390   14148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 23:39:08.521551   14148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 23:39:08.532662   14148 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 23:39:08.631563   14148 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 23:39:08.748443   14148 docker.go:212] disabling docker service ...
	I0906 23:39:08.748497   14148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 23:39:08.761306   14148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 23:39:08.772268   14148 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 23:39:08.889006   14148 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 23:39:08.998691   14148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 23:39:09.010344   14148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 23:39:09.026170   14148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0906 23:39:09.035677   14148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 23:39:09.045031   14148 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 23:39:09.045075   14148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 23:39:09.054159   14148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 23:39:09.062486   14148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 23:39:09.071841   14148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 23:39:09.081318   14148 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 23:39:09.091033   14148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 23:39:09.100945   14148 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 23:39:09.109601   14148 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 23:39:09.109658   14148 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 23:39:09.122912   14148 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 23:39:09.131763   14148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 23:39:09.240401   14148 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 23:39:09.270426   14148 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
	I0906 23:39:09.270509   14148 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0906 23:39:09.274980   14148 retry.go:31] will retry after 1.333351612s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0906 23:39:10.609479   14148 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0906 23:39:10.614617   14148 start.go:534] Will wait 60s for crictl version
	I0906 23:39:10.614664   14148 ssh_runner.go:195] Run: which crictl
	I0906 23:39:10.618062   14148 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 23:39:10.645969   14148 start.go:550] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.3
	RuntimeApiVersion:  v1alpha2
	I0906 23:39:10.646044   14148 ssh_runner.go:195] Run: containerd --version
	I0906 23:39:10.675301   14148 ssh_runner.go:195] Run: containerd --version
	I0906 23:39:10.707104   14148 out.go:177] * Preparing Kubernetes v1.28.1 on containerd 1.7.3 ...
	I0906 23:39:10.708647   14148 main.go:141] libmachine: (addons-594533) Calling .GetIP
	I0906 23:39:10.711150   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:10.711516   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:10.711543   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:10.711810   14148 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 23:39:10.715796   14148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 23:39:10.728296   14148 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0906 23:39:10.728344   14148 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 23:39:10.758069   14148 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0906 23:39:10.758132   14148 ssh_runner.go:195] Run: which lz4
	I0906 23:39:10.761730   14148 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0906 23:39:10.766111   14148 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 23:39:10.766135   14148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6521/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (456573247 bytes)
	I0906 23:39:12.547966   14148 containerd.go:547] Took 1.786257 seconds to copy over tarball
	I0906 23:39:12.548028   14148 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 23:39:15.524279   14148 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.976211216s)
	I0906 23:39:15.524314   14148 containerd.go:554] Took 2.976323 seconds to extract the tarball
	I0906 23:39:15.524326   14148 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 23:39:15.567404   14148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 23:39:15.671272   14148 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 23:39:15.692737   14148 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 23:39:16.725621   14148 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.032849753s)
	I0906 23:39:16.725755   14148 containerd.go:604] all images are preloaded for containerd runtime.
	I0906 23:39:16.725767   14148 cache_images.go:84] Images are preloaded, skipping loading
	I0906 23:39:16.725838   14148 ssh_runner.go:195] Run: sudo crictl info
	I0906 23:39:16.753266   14148 cni.go:84] Creating CNI manager for ""
	I0906 23:39:16.753291   14148 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0906 23:39:16.753310   14148 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 23:39:16.753326   14148 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.126 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-594533 NodeName:addons-594533 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 23:39:16.753478   14148 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-594533"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 23:39:16.753576   14148 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-594533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-594533 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 23:39:16.753626   14148 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0906 23:39:16.762158   14148 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 23:39:16.762233   14148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 23:39:16.770366   14148 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0906 23:39:16.786079   14148 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 23:39:16.801327   14148 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I0906 23:39:16.816599   14148 ssh_runner.go:195] Run: grep 192.168.39.126	control-plane.minikube.internal$ /etc/hosts
	I0906 23:39:16.820026   14148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 23:39:16.831576   14148 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533 for IP: 192.168.39.126
	I0906 23:39:16.831599   14148 certs.go:190] acquiring lock for shared ca certs: {Name:mka817faf056871640af89b49d7550f1171018c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:16.831740   14148 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17174-6521/.minikube/ca.key
	I0906 23:39:17.073468   14148 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6521/.minikube/ca.crt ...
	I0906 23:39:17.073496   14148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6521/.minikube/ca.crt: {Name:mkef8f56e550d5c17fd33f3786a5d267473f3fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:17.073660   14148 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6521/.minikube/ca.key ...
	I0906 23:39:17.073671   14148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6521/.minikube/ca.key: {Name:mk5c1dd7a9d4a62257e6b94ea6164fbe7ff809f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:17.073743   14148 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17174-6521/.minikube/proxy-client-ca.key
	I0906 23:39:17.240008   14148 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6521/.minikube/proxy-client-ca.crt ...
	I0906 23:39:17.240035   14148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6521/.minikube/proxy-client-ca.crt: {Name:mk38ce6bbe5932f162c71810c703a860a4d07896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:17.240181   14148 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6521/.minikube/proxy-client-ca.key ...
	I0906 23:39:17.240190   14148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6521/.minikube/proxy-client-ca.key: {Name:mk935ba4c4c2e51c532ba4ee64c414273bae1842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:17.240302   14148 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.key
	I0906 23:39:17.240315   14148 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt with IP's: []
	I0906 23:39:17.321659   14148 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt ...
	I0906 23:39:17.321686   14148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: {Name:mk59bf4ad9b105300ed0c392c4731c39192c2481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:17.321832   14148 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.key ...
	I0906 23:39:17.321842   14148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.key: {Name:mk51d19fbba03e6bfd7d41dc844c1edfb93acf46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:17.321901   14148 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/apiserver.key.d2bad332
	I0906 23:39:17.321917   14148 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/apiserver.crt.d2bad332 with IP's: [192.168.39.126 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 23:39:17.416068   14148 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/apiserver.crt.d2bad332 ...
	I0906 23:39:17.416095   14148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/apiserver.crt.d2bad332: {Name:mk0573aa37f6cc90db7a530828979de32b67048f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:17.416242   14148 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/apiserver.key.d2bad332 ...
	I0906 23:39:17.416253   14148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/apiserver.key.d2bad332: {Name:mk957cfd3c0d7bc635ef58de8feb90610b4b3614 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:17.416321   14148 certs.go:337] copying /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/apiserver.crt.d2bad332 -> /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/apiserver.crt
	I0906 23:39:17.416410   14148 certs.go:341] copying /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/apiserver.key.d2bad332 -> /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/apiserver.key
	I0906 23:39:17.416457   14148 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/proxy-client.key
	I0906 23:39:17.416474   14148 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/proxy-client.crt with IP's: []
	I0906 23:39:17.751541   14148 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/proxy-client.crt ...
	I0906 23:39:17.751571   14148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/proxy-client.crt: {Name:mkdcc64d7dddc34bee293b40c33b2074d9095db6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:17.751765   14148 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/proxy-client.key ...
	I0906 23:39:17.751781   14148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/proxy-client.key: {Name:mk8e5819989768bfb4729cdf26ed14e42bef1846 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:17.751988   14148 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6521/.minikube/certs/home/jenkins/minikube-integration/17174-6521/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 23:39:17.752033   14148 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6521/.minikube/certs/home/jenkins/minikube-integration/17174-6521/.minikube/certs/ca.pem (1082 bytes)
	I0906 23:39:17.752077   14148 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6521/.minikube/certs/home/jenkins/minikube-integration/17174-6521/.minikube/certs/cert.pem (1123 bytes)
	I0906 23:39:17.752111   14148 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-6521/.minikube/certs/home/jenkins/minikube-integration/17174-6521/.minikube/certs/key.pem (1675 bytes)
	I0906 23:39:17.752642   14148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 23:39:17.779512   14148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 23:39:17.805344   14148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 23:39:17.831300   14148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 23:39:17.856179   14148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6521/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 23:39:17.880399   14148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6521/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 23:39:17.903843   14148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6521/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 23:39:17.925177   14148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6521/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 23:39:17.946399   14148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-6521/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 23:39:17.967810   14148 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 23:39:17.982674   14148 ssh_runner.go:195] Run: openssl version
	I0906 23:39:17.987871   14148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 23:39:17.997033   14148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 23:39:18.001346   14148 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0906 23:39:18.001399   14148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 23:39:18.006843   14148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
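The three commands above install minikube's CA into the guest trust store: OpenSSL locates trust anchors by a symlink named `<subject-hash>.0`, which is why the log hashes `minikubeCA.pem` and then links `b5213941.0` to it. A self-contained sketch of the same step, using a temporary directory and a throwaway CA so no root access is needed (paths and the CA itself are stand-ins, not minikube's actual files):

```shell
# Reproduce the CA-install step from the log: compute the OpenSSL
# subject hash of a CA certificate and create the <hash>.0 lookup
# symlink. The sandbox directory stands in for /usr/share/ca-certificates
# and /etc/ssl/certs; the throwaway CA is illustrative only.
certs_dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$certs_dir/ca.key" -out "$certs_dir/minikubeCA.pem" 2>/dev/null
# Same command the log runs at 23:39:18.001399
hash=$(openssl x509 -hash -noout -in "$certs_dir/minikubeCA.pem")
# OpenSSL resolves CAs via this hash-named link (what c_rehash automates)
ln -fs "$certs_dir/minikubeCA.pem" "$certs_dir/$hash.0"
ls -l "$certs_dir/$hash.0"
```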
	I0906 23:39:18.016094   14148 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 23:39:18.020000   14148 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0906 23:39:18.020058   14148 kubeadm.go:404] StartCluster: {Name:addons-594533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-594533 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.126 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:39:18.020155   14148 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0906 23:39:18.020207   14148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 23:39:18.048243   14148 cri.go:89] found id: ""
	I0906 23:39:18.048314   14148 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 23:39:18.056219   14148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 23:39:18.063846   14148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 23:39:18.071433   14148 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 23:39:18.071470   14148 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 23:39:18.253353   14148 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 23:39:30.450958   14148 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0906 23:39:30.451006   14148 kubeadm.go:322] [preflight] Running pre-flight checks
	I0906 23:39:30.451087   14148 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 23:39:30.451246   14148 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 23:39:30.451375   14148 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 23:39:30.451470   14148 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 23:39:30.452875   14148 out.go:204]   - Generating certificates and keys ...
	I0906 23:39:30.452971   14148 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0906 23:39:30.453055   14148 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0906 23:39:30.453164   14148 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 23:39:30.453241   14148 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0906 23:39:30.453328   14148 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0906 23:39:30.453392   14148 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0906 23:39:30.453485   14148 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0906 23:39:30.453602   14148 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-594533 localhost] and IPs [192.168.39.126 127.0.0.1 ::1]
	I0906 23:39:30.453648   14148 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0906 23:39:30.453777   14148 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-594533 localhost] and IPs [192.168.39.126 127.0.0.1 ::1]
	I0906 23:39:30.453832   14148 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 23:39:30.453885   14148 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 23:39:30.453931   14148 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0906 23:39:30.453981   14148 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 23:39:30.454041   14148 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 23:39:30.454092   14148 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 23:39:30.454147   14148 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 23:39:30.454211   14148 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 23:39:30.454298   14148 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 23:39:30.454362   14148 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 23:39:30.455683   14148 out.go:204]   - Booting up control plane ...
	I0906 23:39:30.455777   14148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 23:39:30.455841   14148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 23:39:30.455914   14148 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 23:39:30.456027   14148 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 23:39:30.456161   14148 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 23:39:30.456220   14148 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0906 23:39:30.456394   14148 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 23:39:30.456464   14148 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002971 seconds
	I0906 23:39:30.456595   14148 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 23:39:30.456726   14148 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 23:39:30.456813   14148 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 23:39:30.457016   14148 kubeadm.go:322] [mark-control-plane] Marking the node addons-594533 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 23:39:30.457081   14148 kubeadm.go:322] [bootstrap-token] Using token: e7eeeo.18zxtt90rwe5ebzy
	I0906 23:39:30.458705   14148 out.go:204]   - Configuring RBAC rules ...
	I0906 23:39:30.458820   14148 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 23:39:30.458934   14148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 23:39:30.459131   14148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 23:39:30.459296   14148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 23:39:30.459463   14148 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 23:39:30.459541   14148 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 23:39:30.459704   14148 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 23:39:30.459775   14148 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0906 23:39:30.459855   14148 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0906 23:39:30.459869   14148 kubeadm.go:322] 
	I0906 23:39:30.459965   14148 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0906 23:39:30.459975   14148 kubeadm.go:322] 
	I0906 23:39:30.460066   14148 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0906 23:39:30.460078   14148 kubeadm.go:322] 
	I0906 23:39:30.460099   14148 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0906 23:39:30.460151   14148 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 23:39:30.460214   14148 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 23:39:30.460221   14148 kubeadm.go:322] 
	I0906 23:39:30.460295   14148 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0906 23:39:30.460306   14148 kubeadm.go:322] 
	I0906 23:39:30.460398   14148 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 23:39:30.460408   14148 kubeadm.go:322] 
	I0906 23:39:30.460488   14148 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0906 23:39:30.460595   14148 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 23:39:30.460687   14148 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 23:39:30.460698   14148 kubeadm.go:322] 
	I0906 23:39:30.460811   14148 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 23:39:30.460936   14148 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0906 23:39:30.460947   14148 kubeadm.go:322] 
	I0906 23:39:30.461053   14148 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token e7eeeo.18zxtt90rwe5ebzy \
	I0906 23:39:30.461184   14148 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d6db475204d931fa7d8669e44953436b8a189c8c3cfce7f6e1a74976ad5c2949 \
	I0906 23:39:30.461214   14148 kubeadm.go:322] 	--control-plane 
	I0906 23:39:30.461222   14148 kubeadm.go:322] 
	I0906 23:39:30.461329   14148 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0906 23:39:30.461342   14148 kubeadm.go:322] 
	I0906 23:39:30.461451   14148 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token e7eeeo.18zxtt90rwe5ebzy \
	I0906 23:39:30.461609   14148 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d6db475204d931fa7d8669e44953436b8a189c8c3cfce7f6e1a74976ad5c2949 
	I0906 23:39:30.461627   14148 cni.go:84] Creating CNI manager for ""
	I0906 23:39:30.461637   14148 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0906 23:39:30.463186   14148 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 23:39:30.464508   14148 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 23:39:30.475996   14148 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
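The 457-byte `/etc/cni/net.d/1-k8s.conflist` written above is not shown in the log. For context, a conflist for the bridge CNI plugin (which the "Configuring bridge CNI" line refers to) has roughly the following shape; field values here are assumptions for illustration, not minikube's exact bytes:

```shell
# Write a representative bridge CNI conflist into a sandbox directory
# (the real target is /etc/cni/net.d/1-k8s.conflist). The subnet and
# plugin options below are illustrative assumptions.
conf_dir=$(mktemp -d)
cat > "$conf_dir/1-k8s.conflist" <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
ls -la "$conf_dir"
```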
	I0906 23:39:30.510612   14148 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 23:39:30.510696   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:30.510696   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=addons-594533 minikube.k8s.io/updated_at=2023_09_06T23_39_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:30.581585   14148 ops.go:34] apiserver oom_adj: -16
	I0906 23:39:30.778812   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:30.870437   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:31.473674   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:31.974077   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:32.473871   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:32.973265   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:33.474121   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:33.974127   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:34.473765   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:34.973915   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:35.473811   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:35.973364   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:36.473770   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:36.973681   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:37.474084   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:37.973085   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:38.474078   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:38.973339   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:39.473144   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:39.973257   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:40.473069   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:40.973414   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:41.473762   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:41.973084   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:42.473199   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:42.973998   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:43.473745   14148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:39:43.565676   14148 kubeadm.go:1081] duration metric: took 13.055038047s to wait for elevateKubeSystemPrivileges.
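The block of `kubectl get sa default` calls above is minikube polling at roughly 500 ms intervals until the `default` ServiceAccount exists, before binding `cluster-admin` to it. The loop is equivalent to this sketch; `poll_until` is a hypothetical helper, and the commented kubectl invocation is copied from the log (it needs a live cluster to run):

```shell
# Generic retry helper matching the ~500 ms polling cadence seen in
# the log: run a command until it succeeds or the timeout (seconds)
# elapses; returns 1 on timeout.
poll_until() {
  deadline=$(( $(date +%s) + $1 )); shift
  until "$@" >/dev/null 2>&1; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 0.5
  done
}

# The log's loop is roughly (requires a live cluster, so commented out):
# poll_until 300 sudo /var/lib/minikube/binaries/v1.28.1/kubectl \
#   get sa default --kubeconfig=/var/lib/minikube/kubeconfig
poll_until 5 true && echo "ready"
```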
	I0906 23:39:43.565708   14148 kubeadm.go:406] StartCluster complete in 25.545657686s
	I0906 23:39:43.565728   14148 settings.go:142] acquiring lock: {Name:mka6ec81ed8deb1244f435c6bbd477c0786ad68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:43.565846   14148 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17174-6521/kubeconfig
	I0906 23:39:43.566235   14148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6521/kubeconfig: {Name:mk6a7f07e519e34b67fceb9aea9a0322fef77b19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:39:43.566420   14148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 23:39:43.566462   14148 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0906 23:39:43.566549   14148 addons.go:69] Setting volumesnapshots=true in profile "addons-594533"
	I0906 23:39:43.566558   14148 addons.go:69] Setting cloud-spanner=true in profile "addons-594533"
	I0906 23:39:43.566563   14148 addons.go:231] Setting addon volumesnapshots=true in "addons-594533"
	I0906 23:39:43.566576   14148 addons.go:231] Setting addon cloud-spanner=true in "addons-594533"
	I0906 23:39:43.566577   14148 addons.go:69] Setting metrics-server=true in profile "addons-594533"
	I0906 23:39:43.566594   14148 addons.go:231] Setting addon metrics-server=true in "addons-594533"
	I0906 23:39:43.566617   14148 host.go:66] Checking if "addons-594533" exists ...
	I0906 23:39:43.566607   14148 addons.go:69] Setting default-storageclass=true in profile "addons-594533"
	I0906 23:39:43.566636   14148 host.go:66] Checking if "addons-594533" exists ...
	I0906 23:39:43.566640   14148 config.go:182] Loaded profile config "addons-594533": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0906 23:39:43.566627   14148 addons.go:69] Setting ingress-dns=true in profile "addons-594533"
	I0906 23:39:43.566648   14148 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-594533"
	I0906 23:39:43.566667   14148 addons.go:231] Setting addon ingress-dns=true in "addons-594533"
	I0906 23:39:43.566670   14148 host.go:66] Checking if "addons-594533" exists ...
	I0906 23:39:43.566684   14148 addons.go:69] Setting inspektor-gadget=true in profile "addons-594533"
	I0906 23:39:43.566698   14148 addons.go:231] Setting addon inspektor-gadget=true in "addons-594533"
	I0906 23:39:43.566724   14148 host.go:66] Checking if "addons-594533" exists ...
	I0906 23:39:43.566732   14148 host.go:66] Checking if "addons-594533" exists ...
	I0906 23:39:43.566850   14148 addons.go:69] Setting registry=true in profile "addons-594533"
	I0906 23:39:43.566888   14148 addons.go:231] Setting addon registry=true in "addons-594533"
	I0906 23:39:43.566931   14148 host.go:66] Checking if "addons-594533" exists ...
	I0906 23:39:43.567082   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.567091   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.567097   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.567106   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.567113   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.567154   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.567170   14148 addons.go:69] Setting gcp-auth=true in profile "addons-594533"
	I0906 23:39:43.567187   14148 mustload.go:65] Loading cluster: addons-594533
	I0906 23:39:43.567198   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.567298   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.567325   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.567343   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.567374   14148 config.go:182] Loaded profile config "addons-594533": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0906 23:39:43.567377   14148 addons.go:69] Setting storage-provisioner=true in profile "addons-594533"
	I0906 23:39:43.567389   14148 addons.go:231] Setting addon storage-provisioner=true in "addons-594533"
	I0906 23:39:43.567424   14148 host.go:66] Checking if "addons-594533" exists ...
	I0906 23:39:43.567431   14148 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-594533"
	I0906 23:39:43.567475   14148 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-594533"
	I0906 23:39:43.567531   14148 host.go:66] Checking if "addons-594533" exists ...
	I0906 23:39:43.567702   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.567725   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.567772   14148 addons.go:69] Setting helm-tiller=true in profile "addons-594533"
	I0906 23:39:43.567786   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.567786   14148 addons.go:231] Setting addon helm-tiller=true in "addons-594533"
	I0906 23:39:43.567811   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.567874   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.567900   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.566550   14148 addons.go:69] Setting ingress=true in profile "addons-594533"
	I0906 23:39:43.568048   14148 addons.go:231] Setting addon ingress=true in "addons-594533"
	I0906 23:39:43.568087   14148 host.go:66] Checking if "addons-594533" exists ...
	I0906 23:39:43.568089   14148 host.go:66] Checking if "addons-594533" exists ...
	I0906 23:39:43.568110   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.568142   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.568430   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.568432   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.568458   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.568468   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.568485   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.568515   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.584527   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41875
	I0906 23:39:43.584945   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.585504   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.585542   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.585822   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.586381   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.586417   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.587081   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41299
	I0906 23:39:43.587313   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39437
	I0906 23:39:43.590532   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.590720   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.591089   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.591115   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.591487   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.591505   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.591518   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.592082   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.592111   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.592327   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.592821   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.592844   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.609506   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36027
	I0906 23:39:43.609728   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0906 23:39:43.609827   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44607
	I0906 23:39:43.610212   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.610325   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.610842   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.610860   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.611050   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.611064   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.611171   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.611545   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.612169   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.612196   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.612329   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45013
	I0906 23:39:43.612371   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45191
	I0906 23:39:43.612675   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.612735   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.612962   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.612980   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.613147   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.613165   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.613220   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.613509   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.613534   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.613681   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.613701   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.613714   14148 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:39:43.613920   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.613962   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.614159   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.614705   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.614728   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.614757   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.614788   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.615410   14148 host.go:66] Checking if "addons-594533" exists ...
	I0906 23:39:43.615753   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.615772   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.632357   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43873
	I0906 23:39:43.633325   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I0906 23:39:43.633487   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
	I0906 23:39:43.633503   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.633781   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.633880   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.634130   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.634147   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.634312   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.634338   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.634385   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46193
	I0906 23:39:43.634503   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.634749   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.634765   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.634807   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.635200   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.635231   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.635605   14148 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:39:43.635608   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.636021   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.636093   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.636115   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.636650   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.636691   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.637511   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.638036   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:43.638268   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I0906 23:39:43.640079   14148 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0906 23:39:43.638620   14148 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:39:43.638717   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.639160   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I0906 23:39:43.641694   14148 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0906 23:39:43.641713   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0906 23:39:43.641732   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:43.642518   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.642535   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.642820   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.643028   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.643117   14148 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:39:43.644662   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.644682   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.645141   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:43.645200   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.647740   14148 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0906 23:39:43.645638   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:43.645781   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:43.645807   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.650953   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34103
	I0906 23:39:43.651440   14148 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 23:39:43.651454   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 23:39:43.651478   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:43.651547   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.651793   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:43.652200   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.652246   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.652253   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:43.652272   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.652731   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.652752   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.652760   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:43.653451   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46267
	I0906 23:39:43.654136   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.654314   14148 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:39:43.654381   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.655309   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.655329   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.656044   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.656366   14148 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:39:43.656480   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:43.658487   14148 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0906 23:39:43.657288   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.658443   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:43.658468   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:43.659808   14148 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 23:39:43.659819   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0906 23:39:43.659836   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:43.659882   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:43.659906   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.659978   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:43.662170   14148 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0906 23:39:43.660794   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:43.662307   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I0906 23:39:43.663790   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.663977   14148 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0906 23:39:43.663987   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0906 23:39:43.664007   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:43.664292   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:43.670930   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:43.671055   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:43.671292   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.671317   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:43.671349   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.671621   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35857
	I0906 23:39:43.671774   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.672344   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.672360   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.672481   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45639
	I0906 23:39:43.672587   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36567
	I0906 23:39:43.672758   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.672956   14148 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:39:43.673032   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.673225   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:43.673257   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.674080   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:43.674224   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:43.674257   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.675043   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.675105   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:43.675138   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:43.675153   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:43.675381   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:43.676962   14148 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0906 23:39:43.675621   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:43.675805   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.676088   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.676334   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.678176   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.679466   14148 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0906 23:39:43.678260   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.678279   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.678514   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.679033   14148 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-594533" context rescaled to 1 replicas
	I0906 23:39:43.679837   14148 addons.go:231] Setting addon default-storageclass=true in "addons-594533"
	I0906 23:39:43.680738   14148 host.go:66] Checking if "addons-594533" exists ...
	I0906 23:39:43.680976   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.680999   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.682660   14148 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0906 23:39:43.681302   14148 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.126 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0906 23:39:43.681351   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.681553   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.681586   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:43.685590   14148 out.go:177] * Verifying Kubernetes components...
	I0906 23:39:43.685457   14148 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:39:43.685494   14148 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:39:43.686867   14148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 23:39:43.686874   14148 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0906 23:39:43.687024   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34927
	I0906 23:39:43.689635   14148 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0906 23:39:43.688666   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.688813   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:43.688879   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:43.690996   14148 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0906 23:39:43.690234   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.692156   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44489
	I0906 23:39:43.692196   14148 out.go:177]   - Using image docker.io/registry:2.8.1
	I0906 23:39:43.692211   14148 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 23:39:43.692226   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.694924   14148 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0906 23:39:43.693925   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.693978   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.695837   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35287
	I0906 23:39:43.696141   14148 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0906 23:39:43.696283   14148 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 23:39:43.696749   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.697441   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.697575   14148 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0906 23:39:43.697208   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32881
	I0906 23:39:43.697591   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 23:39:43.696784   14148 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:39:43.697805   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.697836   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.698895   14148 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0906 23:39:43.700401   14148 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0906 23:39:43.700415   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0906 23:39:43.700428   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:43.698908   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0906 23:39:43.700474   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:43.698909   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:43.699411   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.700542   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.699443   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.699490   14148 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:39:43.701035   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:43.701108   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.701124   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.704174   14148 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0906 23:39:43.701902   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.704144   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.704986   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:43.705775   14148 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 23:39:43.705789   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0906 23:39:43.705806   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:43.705877   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:43.705903   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.705945   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:43.706465   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:43.706502   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:43.706737   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:43.706813   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:43.706929   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:43.708762   14148 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0906 23:39:43.707613   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.707703   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.707918   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.708302   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:43.708707   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:43.709637   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.710084   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:43.710105   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.710183   14148 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0906 23:39:43.710192   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0906 23:39:43.710208   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:43.710208   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:43.710273   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:43.710294   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:43.710307   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.710326   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.710338   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:43.710857   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:43.710916   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:43.710951   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:43.710987   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:43.711646   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:43.711817   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:43.711824   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:43.711976   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:43.712875   14148 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:39:43.714952   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.715000   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:43.717022   14148 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0906 23:39:43.715731   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:43.715982   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:43.720713   14148 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0906 23:39:43.718832   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.719031   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:43.720958   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:43.722634   14148 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0906 23:39:43.723888   14148 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 23:39:43.723906   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0906 23:39:43.723924   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:43.722833   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34085
	I0906 23:39:43.722860   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:43.725018   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:43.725479   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:43.725495   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:43.725861   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:43.726069   14148 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:39:43.727473   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:43.727693   14148 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 23:39:43.727706   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 23:39:43.727721   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:43.727779   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.728337   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:43.728367   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.728859   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:43.729454   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:43.729815   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:43.729960   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:43.730642   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.731119   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:43.731175   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:43.731286   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:43.731405   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:43.731549   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:43.731635   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	W0906 23:39:43.733310   14148 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33710->192.168.39.126:22: read: connection reset by peer
	I0906 23:39:43.733333   14148 retry.go:31] will retry after 144.097456ms: ssh: handshake failed: read tcp 192.168.39.1:33710->192.168.39.126:22: read: connection reset by peer
	I0906 23:39:44.052194   14148 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0906 23:39:44.052216   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0906 23:39:44.078400   14148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 23:39:44.078421   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0906 23:39:44.116180   14148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 23:39:44.124203   14148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0906 23:39:44.127499   14148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 23:39:44.127922   14148 node_ready.go:35] waiting up to 6m0s for node "addons-594533" to be "Ready" ...
	I0906 23:39:44.131633   14148 node_ready.go:49] node "addons-594533" has status "Ready":"True"
	I0906 23:39:44.131652   14148 node_ready.go:38] duration metric: took 3.710113ms waiting for node "addons-594533" to be "Ready" ...
	I0906 23:39:44.131661   14148 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 23:39:44.139574   14148 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5s56s" in "kube-system" namespace to be "Ready" ...
	I0906 23:39:44.148737   14148 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0906 23:39:44.148756   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0906 23:39:44.182824   14148 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0906 23:39:44.182842   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0906 23:39:44.226340   14148 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0906 23:39:44.226361   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0906 23:39:44.258537   14148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 23:39:44.260243   14148 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0906 23:39:44.260280   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0906 23:39:44.293722   14148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 23:39:44.296755   14148 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0906 23:39:44.296774   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0906 23:39:44.410079   14148 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0906 23:39:44.410104   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0906 23:39:44.410344   14148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 23:39:44.412014   14148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 23:39:44.412030   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 23:39:44.437545   14148 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 23:39:44.437565   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0906 23:39:44.503437   14148 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0906 23:39:44.503459   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0906 23:39:44.620245   14148 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0906 23:39:44.620281   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0906 23:39:44.623193   14148 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0906 23:39:44.623214   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0906 23:39:44.657923   14148 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0906 23:39:44.657947   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0906 23:39:44.704311   14148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0906 23:39:44.795428   14148 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 23:39:44.795451   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 23:39:44.807313   14148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 23:39:44.819388   14148 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0906 23:39:44.819410   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0906 23:39:44.930115   14148 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0906 23:39:44.930138   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0906 23:39:44.955422   14148 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0906 23:39:44.955440   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0906 23:39:45.054665   14148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 23:39:45.149438   14148 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0906 23:39:45.149469   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0906 23:39:45.334755   14148 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 23:39:45.334773   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0906 23:39:45.347098   14148 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0906 23:39:45.347129   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0906 23:39:45.420553   14148 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0906 23:39:45.420577   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0906 23:39:45.649054   14148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 23:39:45.819078   14148 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0906 23:39:45.819098   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0906 23:39:45.843605   14148 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 23:39:45.843628   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0906 23:39:46.022066   14148 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0906 23:39:46.022090   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0906 23:39:46.116096   14148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 23:39:46.180690   14148 pod_ready.go:97] error getting pod "coredns-5dd5756b68-5s56s" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5s56s" not found
	I0906 23:39:46.180714   14148 pod_ready.go:81] duration metric: took 2.041118583s waiting for pod "coredns-5dd5756b68-5s56s" in "kube-system" namespace to be "Ready" ...
	E0906 23:39:46.180725   14148 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-5s56s" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5s56s" not found
	I0906 23:39:46.180732   14148 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace to be "Ready" ...
	I0906 23:39:46.213063   14148 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0906 23:39:46.213084   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0906 23:39:46.626557   14148 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0906 23:39:46.626579   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0906 23:39:46.850084   14148 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0906 23:39:46.850105   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0906 23:39:46.996894   14148 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 23:39:46.996925   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0906 23:39:47.136112   14148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 23:39:48.201223   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:39:48.869301   14148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.753088337s)
	I0906 23:39:48.869332   14148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.745104744s)
	I0906 23:39:48.869357   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:48.869372   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:48.869376   14148 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.741844916s)
	I0906 23:39:48.869397   14148 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0906 23:39:48.869360   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:48.869452   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:48.869742   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:48.869760   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:48.869770   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:48.869777   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:48.869780   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:48.870066   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:48.870127   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:48.870094   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:48.870292   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:48.870310   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:48.870320   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:48.870322   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:48.870329   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:48.870507   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:48.870520   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:50.201366   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:39:50.310990   14148 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0906 23:39:50.311103   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:50.314485   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:50.314966   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:50.315001   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:50.315198   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:50.315411   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:50.315604   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:50.315762   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:50.491039   14148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.232466319s)
	I0906 23:39:50.491082   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:50.491096   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:50.491451   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:50.491471   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:50.491502   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:50.491512   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:50.491787   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:50.491803   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:50.860944   14148 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0906 23:39:51.075910   14148 addons.go:231] Setting addon gcp-auth=true in "addons-594533"
	I0906 23:39:51.075965   14148 host.go:66] Checking if "addons-594533" exists ...
	I0906 23:39:51.076308   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:51.076356   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:51.090642   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0906 23:39:51.091046   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:51.091493   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:51.091516   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:51.091814   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:51.092377   14148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:39:51.092420   14148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:39:51.106011   14148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
	I0906 23:39:51.106452   14148 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:39:51.106972   14148 main.go:141] libmachine: Using API Version  1
	I0906 23:39:51.107001   14148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:39:51.107359   14148 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:39:51.107561   14148 main.go:141] libmachine: (addons-594533) Calling .GetState
	I0906 23:39:51.109058   14148 main.go:141] libmachine: (addons-594533) Calling .DriverName
	I0906 23:39:51.109288   14148 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0906 23:39:51.109319   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHHostname
	I0906 23:39:51.111807   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:51.112180   14148 main.go:141] libmachine: (addons-594533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:6b:36", ip: ""} in network mk-addons-594533: {Iface:virbr1 ExpiryTime:2023-09-07 00:38:58 +0000 UTC Type:0 Mac:52:54:00:de:6b:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:addons-594533 Clientid:01:52:54:00:de:6b:36}
	I0906 23:39:51.112213   14148 main.go:141] libmachine: (addons-594533) DBG | domain addons-594533 has defined IP address 192.168.39.126 and MAC address 52:54:00:de:6b:36 in network mk-addons-594533
	I0906 23:39:51.112342   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHPort
	I0906 23:39:51.112511   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHKeyPath
	I0906 23:39:51.112661   14148 main.go:141] libmachine: (addons-594533) Calling .GetSSHUsername
	I0906 23:39:51.112771   14148 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/addons-594533/id_rsa Username:docker}
	I0906 23:39:52.705539   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:39:52.901186   14148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.490810749s)
	I0906 23:39:52.901228   14148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.196880371s)
	I0906 23:39:52.901259   14148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.607499635s)
	I0906 23:39:52.901266   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.901236   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.901282   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:52.901288   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.901296   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:52.901303   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:52.901260   14148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.093918627s)
	I0906 23:39:52.901341   14148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.846628843s)
	I0906 23:39:52.901355   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.901363   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.901370   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:52.901374   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:52.901448   14148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.252363204s)
	W0906 23:39:52.901477   14148 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 23:39:52.901505   14148 retry.go:31] will retry after 191.999822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
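The failure above is the classic CRD-ordering race: the `VolumeSnapshotClass` custom resource is applied in the same batch as the CRDs that define it, and the apiserver has not yet established the new API before the CR arrives. The log shows minikube's own recovery (a retry with `kubectl apply --force` a few lines below); a minimal sketch of the more common workaround is to apply the CRDs first and wait for them to be established before applying the custom resources. All file paths here are hypothetical, not taken from the log:

```shell
#!/bin/sh
# Sketch: avoid the "ensure CRDs are installed first" race by applying
# CRDs and waiting for the Established condition before applying CRs.
# Paths (crds/, resources/) are illustrative placeholders.
apply_with_crds() {
  kubectl apply -f "$1" || return 1
  # Block until the apiserver has registered the new API groups.
  kubectl wait --for=condition=established crd --all --timeout=60s || return 1
  kubectl apply -f "$2"
}

# Only run against a reachable cluster; defining the helper is harmless.
if kubectl cluster-info >/dev/null 2>&1; then
  apply_with_crds crds/ resources/
fi
```

Compared to this, `kubectl apply --force` (as the retry below uses) deletes and re-creates conflicting objects rather than waiting, which is acceptable in a throwaway test cluster but riskier elsewhere.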
	I0906 23:39:52.901589   14148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.785465029s)
	I0906 23:39:52.901606   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.901617   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:52.901839   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:52.901880   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.901893   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.901908   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.901917   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:52.901932   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:52.901946   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:52.901960   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:52.901986   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.901988   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.901996   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.901998   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.902005   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.902008   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.902014   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:52.902016   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:52.902076   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:52.902102   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.902124   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.902140   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.902151   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:52.902208   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:52.902239   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.902246   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.902255   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.902262   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:52.902379   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.902399   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.902425   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.902435   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:52.904179   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.904195   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.904206   14148 addons.go:467] Verifying addon registry=true in "addons-594533"
	I0906 23:39:52.904214   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.904226   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.904250   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:52.904266   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:52.905981   14148 out.go:177] * Verifying registry addon...
	I0906 23:39:52.904288   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.904321   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:52.904339   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.907323   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.904354   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:52.904375   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.907377   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.904180   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:52.906011   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.907442   14148 addons.go:467] Verifying addon metrics-server=true in "addons-594533"
	I0906 23:39:52.904305   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.907495   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.907503   14148 addons.go:467] Verifying addon ingress=true in "addons-594533"
	I0906 23:39:52.907390   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:52.908730   14148 out.go:177] * Verifying ingress addon...
	I0906 23:39:52.907583   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:52.908197   14148 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0906 23:39:52.908946   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:52.910079   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:52.908972   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:52.910601   14148 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0906 23:39:52.918970   14148 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0906 23:39:52.918992   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:52.919473   14148 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0906 23:39:52.919494   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:52.923668   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:52.924218   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:53.094161   14148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 23:39:53.431475   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:53.432479   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:53.941785   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:53.942229   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:54.441561   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:54.441703   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:54.729136   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:39:54.762512   14148 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.653194912s)
	I0906 23:39:54.763987   14148 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0906 23:39:54.762735   14148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.626578597s)
	I0906 23:39:54.765187   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:54.765210   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:54.766858   14148 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0906 23:39:54.765460   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:54.765486   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:54.767963   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:54.767992   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:54.768007   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:54.767967   14148 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0906 23:39:54.768058   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0906 23:39:54.768282   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:54.768302   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:54.768311   14148 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-594533"
	I0906 23:39:54.768324   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:54.769597   14148 out.go:177] * Verifying csi-hostpath-driver addon...
	I0906 23:39:54.771658   14148 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0906 23:39:54.807840   14148 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0906 23:39:54.807859   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:54.833347   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:54.928919   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:54.930183   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:54.941154   14148 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0906 23:39:54.941176   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0906 23:39:55.042764   14148 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 23:39:55.042784   14148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0906 23:39:55.172908   14148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 23:39:55.341436   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:55.432563   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:55.433214   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:55.840053   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:55.868771   14148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.774553799s)
	I0906 23:39:55.868823   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:55.868837   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:55.869109   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:55.869168   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:55.869187   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:55.869208   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:55.869222   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:55.869490   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:55.869516   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:55.869521   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:55.931722   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:55.932078   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:56.340186   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:56.430939   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:56.431126   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:56.838731   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:56.930893   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:56.932108   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:57.114005   14148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.941064693s)
	I0906 23:39:57.114062   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:57.114072   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:57.114351   14148 main.go:141] libmachine: (addons-594533) DBG | Closing plugin on server side
	I0906 23:39:57.114423   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:57.114442   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:57.114462   14148 main.go:141] libmachine: Making call to close driver server
	I0906 23:39:57.114476   14148 main.go:141] libmachine: (addons-594533) Calling .Close
	I0906 23:39:57.114768   14148 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:39:57.114816   14148 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:39:57.116732   14148 addons.go:467] Verifying addon gcp-auth=true in "addons-594533"
	I0906 23:39:57.118255   14148 out.go:177] * Verifying gcp-auth addon...
	I0906 23:39:57.120493   14148 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0906 23:39:57.131006   14148 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 23:39:57.131021   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:39:57.138397   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:39:57.201603   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:39:57.342426   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:57.428617   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:57.429896   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:57.642188   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:39:57.839290   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:57.930590   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:57.930591   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:58.142584   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:39:58.342169   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:58.429908   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:58.431442   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:58.642043   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:39:58.838830   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:58.929995   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:58.930300   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:59.143138   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:39:59.344989   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:59.431483   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:39:59.431731   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:59.643667   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:39:59.700662   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:39:59.839077   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:39:59.929604   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:39:59.930924   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:00.142337   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:00.339353   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:00.429365   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:00.430682   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:00.642616   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:00.839904   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:00.930127   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:00.931288   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:01.142469   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:01.344182   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:01.431651   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:01.431985   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:01.649205   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:01.840395   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:01.931208   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:01.931296   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:02.142360   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:02.200865   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:02.339972   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:02.431849   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:02.433915   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:02.642496   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:02.840596   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:02.930105   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:02.938313   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:03.143286   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:03.339285   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:03.430238   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:03.430532   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:03.642010   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:03.843372   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:04.278858   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:04.279367   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:04.279658   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:04.282138   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:04.341746   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:04.428728   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:04.428741   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:04.642815   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:04.840467   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:04.929038   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:04.931923   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:05.141944   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:05.340207   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:05.431287   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:05.431756   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:05.642376   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:05.840718   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:05.929421   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:05.932720   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:06.141656   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:06.341051   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:06.429501   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:06.430181   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:06.642501   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:06.701043   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:06.839838   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:06.928739   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:06.928780   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:07.142462   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:07.339852   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:07.428572   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:07.430660   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:07.760193   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:07.839347   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:07.931685   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:07.933503   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:08.143090   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:08.338986   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:08.432701   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:08.434776   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:08.642874   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:08.702296   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:08.843746   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:08.946522   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:08.948348   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:09.142781   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:09.340923   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:09.428335   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:09.430316   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:09.642747   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:09.839624   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:09.930878   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:09.931139   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:10.141930   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:10.340332   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:10.430612   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:10.434360   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:10.643654   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:10.839331   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:10.930479   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:10.930892   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:11.142694   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:11.200510   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:11.339926   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:11.430095   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:11.431304   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:11.789368   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:11.838122   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:11.930233   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:11.931028   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:12.142430   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:12.339446   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:12.432432   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:12.435854   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:12.642409   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:12.840337   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:12.930999   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:12.931278   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:13.148668   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:13.340143   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:13.430005   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:13.431440   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:13.643090   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:13.700514   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:14.159306   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:14.160207   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:14.160311   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:14.161671   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:14.339085   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:14.429559   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:14.431080   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:14.642068   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:14.840921   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:14.929986   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:14.930714   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:15.142568   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:15.338847   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:15.430117   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:15.436429   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:15.642426   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:15.839410   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:15.931847   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:15.933170   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:16.142878   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:16.200826   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:16.342541   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:16.430364   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:16.430521   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:16.642531   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:16.838634   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:16.936784   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:16.936783   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:17.142819   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:17.339845   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:17.428447   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:17.430522   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:17.651740   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:17.840617   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:17.930091   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:17.932489   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:18.142938   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:18.339826   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:18.428255   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:18.430204   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:18.642371   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:18.699621   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:18.838590   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:18.928260   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:18.929888   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:19.142649   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:19.343630   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:19.428301   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:19.429868   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:19.642602   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:19.840125   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:19.930575   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:19.933023   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:20.142687   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:20.340892   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:20.429370   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:20.430178   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:20.642801   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:20.703176   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:20.840063   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:20.934400   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:20.935480   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:21.143715   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:21.340349   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:21.430354   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:21.431845   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:21.651462   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:21.839615   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:21.930389   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:21.931568   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:22.143391   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:22.340208   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:22.427958   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:22.432791   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:22.645852   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:22.838880   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:22.928944   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:22.929485   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:23.143466   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:23.200097   14148 pod_ready.go:102] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"False"
	I0906 23:40:23.338953   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:23.429449   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:23.430535   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:23.642513   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:23.840364   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:23.929592   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:23.929890   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:24.144424   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:24.350495   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:24.430474   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:24.431035   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:24.642541   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:24.701904   14148 pod_ready.go:92] pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace has status "Ready":"True"
	I0906 23:40:24.701924   14148 pod_ready.go:81] duration metric: took 38.521186161s waiting for pod "coredns-5dd5756b68-6rrnk" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:24.701932   14148 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-594533" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:24.710861   14148 pod_ready.go:92] pod "etcd-addons-594533" in "kube-system" namespace has status "Ready":"True"
	I0906 23:40:24.710878   14148 pod_ready.go:81] duration metric: took 8.939974ms waiting for pod "etcd-addons-594533" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:24.710887   14148 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-594533" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:24.718132   14148 pod_ready.go:92] pod "kube-apiserver-addons-594533" in "kube-system" namespace has status "Ready":"True"
	I0906 23:40:24.718146   14148 pod_ready.go:81] duration metric: took 7.253223ms waiting for pod "kube-apiserver-addons-594533" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:24.718155   14148 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-594533" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:24.722421   14148 pod_ready.go:92] pod "kube-controller-manager-addons-594533" in "kube-system" namespace has status "Ready":"True"
	I0906 23:40:24.722436   14148 pod_ready.go:81] duration metric: took 4.275816ms waiting for pod "kube-controller-manager-addons-594533" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:24.722444   14148 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zwth6" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:24.734419   14148 pod_ready.go:92] pod "kube-proxy-zwth6" in "kube-system" namespace has status "Ready":"True"
	I0906 23:40:24.734440   14148 pod_ready.go:81] duration metric: took 11.990343ms waiting for pod "kube-proxy-zwth6" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:24.734453   14148 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-594533" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:24.846706   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:24.930337   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:24.931293   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:25.097522   14148 pod_ready.go:92] pod "kube-scheduler-addons-594533" in "kube-system" namespace has status "Ready":"True"
	I0906 23:40:25.097546   14148 pod_ready.go:81] duration metric: took 363.085703ms waiting for pod "kube-scheduler-addons-594533" in "kube-system" namespace to be "Ready" ...
	I0906 23:40:25.097556   14148 pod_ready.go:38] duration metric: took 40.965882278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 23:40:25.097587   14148 api_server.go:52] waiting for apiserver process to appear ...
	I0906 23:40:25.097646   14148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 23:40:25.120070   14148 api_server.go:72] duration metric: took 41.436034144s to wait for apiserver process to appear ...
	I0906 23:40:25.120102   14148 api_server.go:88] waiting for apiserver healthz status ...
	I0906 23:40:25.120120   14148 api_server.go:253] Checking apiserver healthz at https://192.168.39.126:8443/healthz ...
	I0906 23:40:25.126427   14148 api_server.go:279] https://192.168.39.126:8443/healthz returned 200:
	ok
	I0906 23:40:25.127592   14148 api_server.go:141] control plane version: v1.28.1
	I0906 23:40:25.127613   14148 api_server.go:131] duration metric: took 7.504077ms to wait for apiserver health ...
	I0906 23:40:25.127623   14148 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 23:40:25.141641   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:25.305153   14148 system_pods.go:59] 17 kube-system pods found
	I0906 23:40:25.305182   14148 system_pods.go:61] "coredns-5dd5756b68-6rrnk" [d1c8a155-2f82-4e4f-bdbc-dd482f63dd2e] Running
	I0906 23:40:25.305193   14148 system_pods.go:61] "csi-hostpath-attacher-0" [72f598fb-0205-44a3-8af9-055938a28913] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 23:40:25.305206   14148 system_pods.go:61] "csi-hostpath-resizer-0" [4ab6c9bf-66f9-494d-91b7-4b3152b26a6f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 23:40:25.305217   14148 system_pods.go:61] "csi-hostpathplugin-hfzsf" [aa950cd2-ce5b-474e-bac3-7b8a59e03481] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 23:40:25.305227   14148 system_pods.go:61] "etcd-addons-594533" [7e6978df-93ad-430f-9449-fb08a349568d] Running
	I0906 23:40:25.305236   14148 system_pods.go:61] "kube-apiserver-addons-594533" [5b66e4f4-9ac0-4877-9392-23252c868b2a] Running
	I0906 23:40:25.305248   14148 system_pods.go:61] "kube-controller-manager-addons-594533" [d943931a-4f81-4c8e-8efc-534b980764ba] Running
	I0906 23:40:25.305256   14148 system_pods.go:61] "kube-ingress-dns-minikube" [7afdde20-124b-43b1-a8a8-134de621477a] Running
	I0906 23:40:25.305263   14148 system_pods.go:61] "kube-proxy-zwth6" [8b3f23a5-8203-4082-98b4-0ca557e9021c] Running
	I0906 23:40:25.305273   14148 system_pods.go:61] "kube-scheduler-addons-594533" [8911102a-a8c2-49fb-9ff4-f7a31fc3e58c] Running
	I0906 23:40:25.305281   14148 system_pods.go:61] "metrics-server-7c66d45ddc-74zf4" [5fe11905-0b57-4b88-8c59-c997401fadee] Running
	I0906 23:40:25.305289   14148 system_pods.go:61] "registry-crq7x" [1215db8a-b169-4deb-a49a-11998b9284ea] Running
	I0906 23:40:25.305299   14148 system_pods.go:61] "registry-proxy-pv7gw" [b92465d6-7cfe-40b4-a367-789f7718636f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 23:40:25.305314   14148 system_pods.go:61] "snapshot-controller-58dbcc7b99-7lwpz" [d2f34ff8-7730-4b60-ba53-b91b2418254f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 23:40:25.305326   14148 system_pods.go:61] "snapshot-controller-58dbcc7b99-lqklg" [a629d687-b05d-410f-9051-bde87a24c4cf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 23:40:25.305337   14148 system_pods.go:61] "storage-provisioner" [e2dff4a9-b199-48ee-a893-b9f409f6ea93] Running
	I0906 23:40:25.305347   14148 system_pods.go:61] "tiller-deploy-7b677967b9-b2jtq" [2dc7a85e-be48-41ae-a2b0-7fc4b48cdf5c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0906 23:40:25.305358   14148 system_pods.go:74] duration metric: took 177.728657ms to wait for pod list to return data ...
	I0906 23:40:25.305367   14148 default_sa.go:34] waiting for default service account to be created ...
	I0906 23:40:25.339003   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:25.433888   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:25.435483   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:25.497167   14148 default_sa.go:45] found service account: "default"
	I0906 23:40:25.497193   14148 default_sa.go:55] duration metric: took 191.818058ms for default service account to be created ...
	I0906 23:40:25.497204   14148 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 23:40:25.642958   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:25.703499   14148 system_pods.go:86] 17 kube-system pods found
	I0906 23:40:25.703523   14148 system_pods.go:89] "coredns-5dd5756b68-6rrnk" [d1c8a155-2f82-4e4f-bdbc-dd482f63dd2e] Running
	I0906 23:40:25.703533   14148 system_pods.go:89] "csi-hostpath-attacher-0" [72f598fb-0205-44a3-8af9-055938a28913] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 23:40:25.703541   14148 system_pods.go:89] "csi-hostpath-resizer-0" [4ab6c9bf-66f9-494d-91b7-4b3152b26a6f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 23:40:25.703549   14148 system_pods.go:89] "csi-hostpathplugin-hfzsf" [aa950cd2-ce5b-474e-bac3-7b8a59e03481] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 23:40:25.703554   14148 system_pods.go:89] "etcd-addons-594533" [7e6978df-93ad-430f-9449-fb08a349568d] Running
	I0906 23:40:25.703559   14148 system_pods.go:89] "kube-apiserver-addons-594533" [5b66e4f4-9ac0-4877-9392-23252c868b2a] Running
	I0906 23:40:25.703563   14148 system_pods.go:89] "kube-controller-manager-addons-594533" [d943931a-4f81-4c8e-8efc-534b980764ba] Running
	I0906 23:40:25.703567   14148 system_pods.go:89] "kube-ingress-dns-minikube" [7afdde20-124b-43b1-a8a8-134de621477a] Running
	I0906 23:40:25.703572   14148 system_pods.go:89] "kube-proxy-zwth6" [8b3f23a5-8203-4082-98b4-0ca557e9021c] Running
	I0906 23:40:25.703583   14148 system_pods.go:89] "kube-scheduler-addons-594533" [8911102a-a8c2-49fb-9ff4-f7a31fc3e58c] Running
	I0906 23:40:25.703588   14148 system_pods.go:89] "metrics-server-7c66d45ddc-74zf4" [5fe11905-0b57-4b88-8c59-c997401fadee] Running
	I0906 23:40:25.703593   14148 system_pods.go:89] "registry-crq7x" [1215db8a-b169-4deb-a49a-11998b9284ea] Running
	I0906 23:40:25.703598   14148 system_pods.go:89] "registry-proxy-pv7gw" [b92465d6-7cfe-40b4-a367-789f7718636f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 23:40:25.703605   14148 system_pods.go:89] "snapshot-controller-58dbcc7b99-7lwpz" [d2f34ff8-7730-4b60-ba53-b91b2418254f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 23:40:25.703612   14148 system_pods.go:89] "snapshot-controller-58dbcc7b99-lqklg" [a629d687-b05d-410f-9051-bde87a24c4cf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 23:40:25.703618   14148 system_pods.go:89] "storage-provisioner" [e2dff4a9-b199-48ee-a893-b9f409f6ea93] Running
	I0906 23:40:25.703624   14148 system_pods.go:89] "tiller-deploy-7b677967b9-b2jtq" [2dc7a85e-be48-41ae-a2b0-7fc4b48cdf5c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0906 23:40:25.703631   14148 system_pods.go:126] duration metric: took 206.421456ms to wait for k8s-apps to be running ...
	I0906 23:40:25.703638   14148 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 23:40:25.703677   14148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 23:40:25.719835   14148 system_svc.go:56] duration metric: took 16.188177ms WaitForService to wait for kubelet.
	I0906 23:40:25.719856   14148 kubeadm.go:581] duration metric: took 42.035828337s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 23:40:25.719873   14148 node_conditions.go:102] verifying NodePressure condition ...
	I0906 23:40:25.843331   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:25.898304   14148 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0906 23:40:25.898344   14148 node_conditions.go:123] node cpu capacity is 2
	I0906 23:40:25.898360   14148 node_conditions.go:105] duration metric: took 178.481602ms to run NodePressure ...
	I0906 23:40:25.898375   14148 start.go:228] waiting for startup goroutines ...
	I0906 23:40:25.928896   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:25.931739   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:26.142195   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:26.338932   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:26.430667   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:26.433961   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:26.642481   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:26.839548   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:26.933209   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:26.935929   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:27.142265   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:27.340756   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:27.428520   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:27.430486   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:40:27.642728   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:27.839245   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:27.929403   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:27.929738   14148 kapi.go:107] duration metric: took 35.021539415s to wait for kubernetes.io/minikube-addons=registry ...
	I0906 23:40:28.141974   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:28.339396   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:28.429585   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:28.642241   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:28.839457   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:28.931309   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:29.142464   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:29.339565   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:29.428882   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:29.642614   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:29.840627   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:29.928083   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:30.142373   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:30.341791   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:30.430791   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:30.646168   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:30.839939   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:30.929986   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:31.142232   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:31.339583   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:31.428175   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:31.879233   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:31.879475   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:31.928433   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:32.142805   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:32.339451   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:32.431095   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:32.648212   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:32.839312   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:32.930571   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:33.142393   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:33.340787   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:33.428554   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:33.659944   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:33.841652   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:33.928708   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:34.143395   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:34.343494   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:34.428463   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:34.642930   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:34.841002   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:34.928536   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:35.147846   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:35.341027   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:35.429687   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:35.643654   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:36.179249   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:36.180113   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:36.184920   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:36.339410   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:36.432829   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:36.643587   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:36.841583   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:36.929043   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:37.143911   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:37.345375   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:37.429580   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:37.642606   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:37.839393   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:37.929018   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:38.142213   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:38.340534   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:38.428754   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:38.645390   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:38.842821   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:38.928351   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:39.142373   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:39.344579   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:39.428867   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:39.642676   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:39.839297   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:39.931610   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:40.143662   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:40.337878   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:40.430751   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:40.644119   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:40.840788   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:40.928014   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:41.145314   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:41.342008   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:41.429198   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:41.642821   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:41.839058   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:41.934900   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:42.142792   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:42.341185   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:42.431539   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:42.649208   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:42.839803   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:42.932883   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:43.142902   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:43.342399   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:43.429223   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:43.650180   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:43.838708   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:43.928480   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:44.142577   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:44.338670   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:44.428790   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:44.642898   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:44.842583   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:40:44.928892   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:45.142479   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:45.339752   14148 kapi.go:107] duration metric: took 50.568089442s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0906 23:40:45.430707   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:45.643221   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:45.928758   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:46.147818   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:46.428351   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:46.643145   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:46.928801   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:47.141971   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:47.428628   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:47.642850   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:47.928174   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:48.142189   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:48.428816   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:48.642897   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:48.928330   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:49.141962   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:49.432476   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:49.642985   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:49.928378   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:50.142554   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:50.429479   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:50.644612   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:50.929484   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:51.142729   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:51.429370   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:51.642852   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:51.928052   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:52.142105   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:52.429043   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:52.642250   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:52.928686   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:53.142690   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:53.428174   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:53.643129   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:53.929184   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:54.142672   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:54.429877   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:54.643498   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:54.929107   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:55.143578   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:55.429440   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:55.643820   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:55.928239   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:56.142952   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:56.429330   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:56.642003   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:56.928654   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:57.142220   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:57.428877   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:57.642234   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:57.928819   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:58.143117   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:58.440581   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:58.642212   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:58.929864   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:59.142653   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:59.429538   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:40:59.642639   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:40:59.930548   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:00.142224   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:00.429330   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:00.645929   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:00.929097   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:01.141865   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:01.428692   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:01.642257   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:01.929674   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:02.142554   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:02.429217   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:02.642337   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:02.928770   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:03.142934   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:03.428989   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:03.642901   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:03.929062   14148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:41:04.142686   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:04.429615   14148 kapi.go:107] duration metric: took 1m11.519008317s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0906 23:41:04.643114   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:05.141851   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:05.642970   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:06.143278   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:06.642757   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:07.142667   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:07.642931   14148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:41:08.142549   14148 kapi.go:107] duration metric: took 1m11.022050164s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0906 23:41:08.143960   14148 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-594533 cluster.
	I0906 23:41:08.145246   14148 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0906 23:41:08.146508   14148 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0906 23:41:08.147689   14148 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, helm-tiller, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0906 23:41:08.148901   14148 addons.go:502] enable addons completed in 1m24.582442136s: enabled=[cloud-spanner ingress-dns storage-provisioner helm-tiller inspektor-gadget metrics-server default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0906 23:41:08.148934   14148 start.go:233] waiting for cluster config update ...
	I0906 23:41:08.148948   14148 start.go:242] writing updated cluster config ...
	I0906 23:41:08.149227   14148 ssh_runner.go:195] Run: rm -f paused
	I0906 23:41:08.197875   14148 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0906 23:41:08.199529   14148 out.go:177] * Done! kubectl is now configured to use "addons-594533" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID
	25ea5a7422584       433dbc17191a7       6 seconds ago        Running             nginx                                    0                   5b95aecfa2bc7
	d6161de68d946       6d2a98b274382       23 seconds ago       Running             gcp-auth                                 0                   6b336a741b479
	3bded817b5270       825aff16c20cc       26 seconds ago       Running             controller                               0                   f72a3115cbe48
	ad7e7c33e0f29       738351fd438f0       45 seconds ago       Running             csi-snapshotter                          0                   1ecaea5bd642f
	7c32cea2841b0       931dbfd16f87c       46 seconds ago       Running             csi-provisioner                          0                   1ecaea5bd642f
	3c08fc9aa94a5       e899260153aed       48 seconds ago       Running             liveness-probe                           0                   1ecaea5bd642f
	5ee40711bac35       e255e073c508c       49 seconds ago       Running             hostpath                                 0                   1ecaea5bd642f
	6ba0213a3e900       88ef14a257f42       50 seconds ago       Running             node-driver-registrar                    0                   1ecaea5bd642f
	415cdbe045507       7e7451bb70423       51 seconds ago       Exited              patch                                    0                   02edd88ea9107
	7e2c44c6e0922       7e7451bb70423       51 seconds ago       Exited              create                                   0                   dc97c06d2dc8a
	efeee0e32800c       19a639eda60f0       51 seconds ago       Running             csi-resizer                              0                   b91299b6326e0
	0058c3a7e868b       59cbb42146a37       53 seconds ago       Running             csi-attacher                             0                   f845b8c1f1f6e
	a5ae000e6a920       a1ed5895ba635       55 seconds ago       Running             csi-external-health-monitor-controller   0                   1ecaea5bd642f
	815422ee3f52d       7e7451bb70423       55 seconds ago       Exited              patch                                    1                   a854d13dbaa75
	df252bcefa77e       7e7451bb70423       56 seconds ago       Exited              create                                   0                   d1f3d0118f16b
	4da248514bdb9       aa61ee9c70bc4       58 seconds ago       Running             volume-snapshot-controller               0                   eb4136f3e6069
	93d3714dbaf8b       aa61ee9c70bc4       58 seconds ago       Running             volume-snapshot-controller               0                   fc60e8a69f559
	c647c734b5d07       3f39089e90831       59 seconds ago       Running             tiller                                   0                   797a2316e6d30
	2142372f1f5ba       d2fd211e7dcaa       About a minute ago   Running             registry-proxy                           0                   2fd54a3cc2fb6
	a622452e5ae27       3a0f7b0a13ef6       About a minute ago   Running             registry                                 0                   71de36b9f388f
	1f7d7e2decc17       1499ed4fbd0aa       About a minute ago   Running             minikube-ingress-dns                     0                   1db9f31f5cd33
	706f82c1f995b       6e38f40d628db       About a minute ago   Running             storage-provisioner                      0                   c4024af6c0dfb
	f71af02b65734       ead0a4a53df89       About a minute ago   Running             coredns                                  0                   7167c7b51cab9
	a61ae532d8087       6cdbabde3874e       About a minute ago   Running             kube-proxy                               0                   8ffb29f4369bc
	12e619be78929       73deb9a3f7025       2 minutes ago        Running             etcd                                     0                   7becedd977638
	058cf62f54b61       b462ce0c8b1ff       2 minutes ago        Running             kube-scheduler                           0                   4a7b5a4b33dc0
	6c151abce636c       821b3dfea27be       2 minutes ago        Running             kube-controller-manager                  0                   7f024fcb3e913
	11d32dc02c2c3       5c801295c21d0       2 minutes ago        Running             kube-apiserver                           0                   8f332c1b446e2
	
	* 
	* ==> containerd <==
	* -- Journal begins at Wed 2023-09-06 23:38:54 UTC, ends at Wed 2023-09-06 23:41:30 UTC. --
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.155067372Z" level=info msg="StopPodSandbox for \"eed725b2dfc804aba0118d44961f6456fb84bf1b5839a64aa65d296a0ec8e87b\""
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.155157421Z" level=info msg="Container to stop \"3cabc6d3c238493535ce6e268a8dd3586d2ee83bae1dff17cb93e3e17a76535d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.199428839Z" level=info msg="shim disconnected" id=eed725b2dfc804aba0118d44961f6456fb84bf1b5839a64aa65d296a0ec8e87b namespace=k8s.io
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.199495811Z" level=warning msg="cleaning up after shim disconnected" id=eed725b2dfc804aba0118d44961f6456fb84bf1b5839a64aa65d296a0ec8e87b namespace=k8s.io
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.199509350Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.257604457Z" level=info msg="TearDown network for sandbox \"eed725b2dfc804aba0118d44961f6456fb84bf1b5839a64aa65d296a0ec8e87b\" successfully"
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.257658248Z" level=info msg="StopPodSandbox for \"eed725b2dfc804aba0118d44961f6456fb84bf1b5839a64aa65d296a0ec8e87b\" returns successfully"
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.656925754Z" level=info msg="StopContainer for \"b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207\" with timeout 30 (s)"
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.658711225Z" level=info msg="Stop container \"b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207\" with signal quit"
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.734380379Z" level=info msg="shim disconnected" id=b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207 namespace=k8s.io
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.734448548Z" level=warning msg="cleaning up after shim disconnected" id=b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207 namespace=k8s.io
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.734459587Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.766139673Z" level=info msg="StopContainer for \"b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207\" returns successfully"
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.766775480Z" level=info msg="StopPodSandbox for \"eebeaf95ad7408500b89c10e81f93241c9f7f71cd7a902cb3d6721ae97897935\""
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.766865349Z" level=info msg="Container to stop \"b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.864133037Z" level=info msg="shim disconnected" id=eebeaf95ad7408500b89c10e81f93241c9f7f71cd7a902cb3d6721ae97897935 namespace=k8s.io
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.864222166Z" level=warning msg="cleaning up after shim disconnected" id=eebeaf95ad7408500b89c10e81f93241c9f7f71cd7a902cb3d6721ae97897935 namespace=k8s.io
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.864233951Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.940084659Z" level=info msg="TearDown network for sandbox \"eebeaf95ad7408500b89c10e81f93241c9f7f71cd7a902cb3d6721ae97897935\" successfully"
	Sep 06 23:41:28 addons-594533 containerd[689]: time="2023-09-06T23:41:28.940120491Z" level=info msg="StopPodSandbox for \"eebeaf95ad7408500b89c10e81f93241c9f7f71cd7a902cb3d6721ae97897935\" returns successfully"
	Sep 06 23:41:29 addons-594533 containerd[689]: time="2023-09-06T23:41:29.178369140Z" level=info msg="RemoveContainer for \"b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207\""
	Sep 06 23:41:29 addons-594533 containerd[689]: time="2023-09-06T23:41:29.295865687Z" level=info msg="RemoveContainer for \"b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207\" returns successfully"
	Sep 06 23:41:29 addons-594533 containerd[689]: time="2023-09-06T23:41:29.316315834Z" level=error msg="ContainerStatus for \"b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207\": not found"
	Sep 06 23:41:29 addons-594533 containerd[689]: time="2023-09-06T23:41:29.352102046Z" level=info msg="RemoveContainer for \"3cabc6d3c238493535ce6e268a8dd3586d2ee83bae1dff17cb93e3e17a76535d\""
	Sep 06 23:41:29 addons-594533 containerd[689]: time="2023-09-06T23:41:29.377279235Z" level=info msg="RemoveContainer for \"3cabc6d3c238493535ce6e268a8dd3586d2ee83bae1dff17cb93e3e17a76535d\" returns successfully"
	
	* 
	* ==> coredns [f71af02b65734462b3c43e191f49a95ebc62b196bc0c6d1d9362c6bf05d2cf70] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 10.244.0.18:51884 - 42589 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000297038s
	[INFO] 10.244.0.18:60764 - 2101 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151291s
	[INFO] 10.244.0.18:41373 - 14845 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123965s
	[INFO] 10.244.0.18:44042 - 59684 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000061453s
	[INFO] 10.244.0.18:60053 - 32592 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000142884s
	[INFO] 10.244.0.18:60968 - 52669 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000070429s
	[INFO] 10.244.0.18:59492 - 41871 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002690791s
	[INFO] 10.244.0.18:45220 - 40095 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 192 0.003406456s
	[INFO] 10.244.0.21:38532 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000362245s
	[INFO] 10.244.0.21:50324 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000200415s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-594533
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-594533
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=addons-594533
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_06T23_39_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-594533
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-594533"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Sep 2023 23:39:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-594533
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Sep 2023 23:41:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Sep 2023 23:41:03 +0000   Wed, 06 Sep 2023 23:39:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Sep 2023 23:41:03 +0000   Wed, 06 Sep 2023 23:39:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Sep 2023 23:41:03 +0000   Wed, 06 Sep 2023 23:39:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Sep 2023 23:41:03 +0000   Wed, 06 Sep 2023 23:39:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.126
	  Hostname:    addons-594533
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe2cf365b99475ea61525431c9c6cf7
	  System UUID:                bbe2cf36-5b99-475e-a615-25431c9c6cf7
	  Boot ID:                    e7e20982-60a9-4059-953e-e5ef9b958f48
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.3
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  gcp-auth                    gcp-auth-d4c87556c-ngzd2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  headlamp                    headlamp-699c48fb74-nlnjs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  ingress-nginx               ingress-nginx-controller-5dcd45b5bf-x9m5b    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         98s
	  kube-system                 coredns-5dd5756b68-6rrnk                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     107s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 csi-hostpathplugin-hfzsf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 etcd-addons-594533                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m
	  kube-system                 kube-apiserver-addons-594533                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-addons-594533        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-zwth6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-addons-594533                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 registry-crq7x                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 registry-proxy-pv7gw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 snapshot-controller-58dbcc7b99-7lwpz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 snapshot-controller-58dbcc7b99-lqklg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 tiller-deploy-7b677967b9-b2jtq               0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 2m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node addons-594533 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node addons-594533 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x7 over 2m9s)  kubelet          Node addons-594533 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node addons-594533 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node addons-594533 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node addons-594533 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m                   kubelet          Node addons-594533 status is now: NodeReady
	  Normal  RegisteredNode           107s                 node-controller  Node addons-594533 event: Registered Node addons-594533 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.103401] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.367534] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.339596] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147977] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.047252] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 6 23:39] systemd-fstab-generator[557]: Ignoring "noauto" for root device
	[  +0.107663] systemd-fstab-generator[568]: Ignoring "noauto" for root device
	[  +0.144986] systemd-fstab-generator[582]: Ignoring "noauto" for root device
	[  +0.112836] systemd-fstab-generator[593]: Ignoring "noauto" for root device
	[  +0.240583] systemd-fstab-generator[620]: Ignoring "noauto" for root device
	[  +6.428139] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +5.292249] systemd-fstab-generator[838]: Ignoring "noauto" for root device
	[  +9.229946] systemd-fstab-generator[1209]: Ignoring "noauto" for root device
	[ +19.346817] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.327445] kauditd_printk_skb: 44 callbacks suppressed
	[Sep 6 23:40] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.339319] kauditd_printk_skb: 16 callbacks suppressed
	[ +19.249439] kauditd_printk_skb: 6 callbacks suppressed
	[Sep 6 23:41] kauditd_printk_skb: 8 callbacks suppressed
	[ +14.199400] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.969012] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> etcd [12e619be78929e24934a00061f7043846fa2f3a1f793649e062d8ce4e8638c29] <==
	* {"level":"info","ts":"2023-09-06T23:40:14.151044Z","caller":"traceutil/trace.go:171","msg":"trace[1811049010] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:836; }","duration":"225.888161ms","start":"2023-09-06T23:40:13.925151Z","end":"2023-09-06T23:40:14.151039Z","steps":["trace[1811049010] 'agreement among raft nodes before linearized reading'  (duration: 225.795919ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T23:40:21.647187Z","caller":"traceutil/trace.go:171","msg":"trace[1222207829] transaction","detail":"{read_only:false; response_revision:857; number_of_response:1; }","duration":"111.122852ms","start":"2023-09-06T23:40:21.536045Z","end":"2023-09-06T23:40:21.647168Z","steps":["trace[1222207829] 'process raft request'  (duration: 110.911955ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T23:40:31.870939Z","caller":"traceutil/trace.go:171","msg":"trace[2093188657] linearizableReadLoop","detail":"{readStateIndex:924; appliedIndex:923; }","duration":"232.827025ms","start":"2023-09-06T23:40:31.638099Z","end":"2023-09-06T23:40:31.870926Z","steps":["trace[2093188657] 'read index received'  (duration: 232.567638ms)","trace[2093188657] 'applied index is now lower than readState.Index'  (duration: 258.89µs)"],"step_count":2}
	{"level":"info","ts":"2023-09-06T23:40:31.871172Z","caller":"traceutil/trace.go:171","msg":"trace[1414132122] transaction","detail":"{read_only:false; response_revision:899; number_of_response:1; }","duration":"258.032331ms","start":"2023-09-06T23:40:31.61313Z","end":"2023-09-06T23:40:31.871163Z","steps":["trace[1414132122] 'process raft request'  (duration: 257.704686ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-06T23:40:31.87138Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.28033ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10535"}
	{"level":"info","ts":"2023-09-06T23:40:31.871403Z","caller":"traceutil/trace.go:171","msg":"trace[758303370] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:899; }","duration":"233.321862ms","start":"2023-09-06T23:40:31.638076Z","end":"2023-09-06T23:40:31.871397Z","steps":["trace[758303370] 'agreement among raft nodes before linearized reading'  (duration: 233.24196ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T23:40:36.166587Z","caller":"traceutil/trace.go:171","msg":"trace[867963609] linearizableReadLoop","detail":"{readStateIndex:966; appliedIndex:965; }","duration":"334.163569ms","start":"2023-09-06T23:40:35.832411Z","end":"2023-09-06T23:40:36.166575Z","steps":["trace[867963609] 'read index received'  (duration: 333.972462ms)","trace[867963609] 'applied index is now lower than readState.Index'  (duration: 190.646µs)"],"step_count":2}
	{"level":"info","ts":"2023-09-06T23:40:36.166952Z","caller":"traceutil/trace.go:171","msg":"trace[472992152] transaction","detail":"{read_only:false; response_revision:940; number_of_response:1; }","duration":"478.499227ms","start":"2023-09-06T23:40:35.688438Z","end":"2023-09-06T23:40:36.166937Z","steps":["trace[472992152] 'process raft request'  (duration: 477.991047ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-06T23:40:36.16709Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-06T23:40:35.688423Z","time spent":"478.565114ms","remote":"127.0.0.1:49368","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4250,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-grnbn\" mod_revision:931 > success:<request_put:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-grnbn\" value_size:4178 >> failure:<request_range:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-grnbn\" > >"}
	{"level":"warn","ts":"2023-09-06T23:40:36.167437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"335.034367ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:77911"}
	{"level":"info","ts":"2023-09-06T23:40:36.167469Z","caller":"traceutil/trace.go:171","msg":"trace[571363653] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:940; }","duration":"335.072427ms","start":"2023-09-06T23:40:35.832388Z","end":"2023-09-06T23:40:36.16746Z","steps":["trace[571363653] 'agreement among raft nodes before linearized reading'  (duration: 334.911926ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-06T23:40:36.167494Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-06T23:40:35.832374Z","time spent":"335.112727ms","remote":"127.0.0.1:49368","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":17,"response size":77934,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2023-09-06T23:40:36.167713Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.636256ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14063"}
	{"level":"info","ts":"2023-09-06T23:40:36.167808Z","caller":"traceutil/trace.go:171","msg":"trace[1622911542] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:940; }","duration":"243.735105ms","start":"2023-09-06T23:40:35.924063Z","end":"2023-09-06T23:40:36.167799Z","steps":["trace[1622911542] 'agreement among raft nodes before linearized reading'  (duration: 243.593772ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-06T23:40:36.168204Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.646289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-06T23:40:36.168233Z","caller":"traceutil/trace.go:171","msg":"trace[1907341427] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; response_count:0; response_revision:940; }","duration":"100.677705ms","start":"2023-09-06T23:40:36.067545Z","end":"2023-09-06T23:40:36.168223Z","steps":["trace[1907341427] 'agreement among raft nodes before linearized reading'  (duration: 100.63207ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T23:41:00.641112Z","caller":"traceutil/trace.go:171","msg":"trace[777155810] linearizableReadLoop","detail":"{readStateIndex:1074; appliedIndex:1073; }","duration":"106.603987ms","start":"2023-09-06T23:41:00.534492Z","end":"2023-09-06T23:41:00.641096Z","steps":["trace[777155810] 'read index received'  (duration: 106.521623ms)","trace[777155810] 'applied index is now lower than readState.Index'  (duration: 81.574µs)"],"step_count":2}
	{"level":"info","ts":"2023-09-06T23:41:00.641211Z","caller":"traceutil/trace.go:171","msg":"trace[1976840056] transaction","detail":"{read_only:false; response_revision:1043; number_of_response:1; }","duration":"196.670326ms","start":"2023-09-06T23:41:00.444534Z","end":"2023-09-06T23:41:00.641204Z","steps":["trace[1976840056] 'process raft request'  (duration: 196.422165ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-06T23:41:00.641435Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.944335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-09-06T23:41:00.64149Z","caller":"traceutil/trace.go:171","msg":"trace[1371582954] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:1043; }","duration":"107.014574ms","start":"2023-09-06T23:41:00.534469Z","end":"2023-09-06T23:41:00.641483Z","steps":["trace[1371582954] 'agreement among raft nodes before linearized reading'  (duration: 106.875896ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T23:41:06.557129Z","caller":"traceutil/trace.go:171","msg":"trace[2084674049] linearizableReadLoop","detail":"{readStateIndex:1097; appliedIndex:1096; }","duration":"170.308429ms","start":"2023-09-06T23:41:06.386803Z","end":"2023-09-06T23:41:06.557111Z","steps":["trace[2084674049] 'read index received'  (duration: 170.184886ms)","trace[2084674049] 'applied index is now lower than readState.Index'  (duration: 123.161µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-06T23:41:06.557378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.629572ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/csi-hostpathplugin-hfzsf\" ","response":"range_response_count:1 size:12439"}
	{"level":"info","ts":"2023-09-06T23:41:06.557418Z","caller":"traceutil/trace.go:171","msg":"trace[2056109951] range","detail":"{range_begin:/registry/pods/kube-system/csi-hostpathplugin-hfzsf; range_end:; response_count:1; response_revision:1065; }","duration":"170.7166ms","start":"2023-09-06T23:41:06.386692Z","end":"2023-09-06T23:41:06.557409Z","steps":["trace[2056109951] 'agreement among raft nodes before linearized reading'  (duration: 170.566884ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-06T23:41:06.558008Z","caller":"traceutil/trace.go:171","msg":"trace[1132676904] transaction","detail":"{read_only:false; response_revision:1065; number_of_response:1; }","duration":"379.796615ms","start":"2023-09-06T23:41:06.178202Z","end":"2023-09-06T23:41:06.557998Z","steps":["trace[1132676904] 'process raft request'  (duration: 378.825017ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-06T23:41:06.558568Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-06T23:41:06.178188Z","time spent":"380.308736ms","remote":"127.0.0.1:49386","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1044 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:450 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	
	* 
	* ==> gcp-auth [d6161de68d946c48beadd2fd926ebb024532e6174da39690e0bb9a38e48ca239] <==
	* 2023/09/06 23:41:07 GCP Auth Webhook started!
	2023/09/06 23:41:12 Ready to marshal response ...
	2023/09/06 23:41:12 Ready to write response ...
	2023/09/06 23:41:17 Ready to marshal response ...
	2023/09/06 23:41:17 Ready to write response ...
	2023/09/06 23:41:18 Ready to marshal response ...
	2023/09/06 23:41:18 Ready to write response ...
	2023/09/06 23:41:26 Ready to marshal response ...
	2023/09/06 23:41:26 Ready to write response ...
	2023/09/06 23:41:26 Ready to marshal response ...
	2023/09/06 23:41:26 Ready to write response ...
	2023/09/06 23:41:26 Ready to marshal response ...
	2023/09/06 23:41:26 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:41:30 up 2 min,  0 users,  load average: 2.99, 1.50, 0.58
	Linux addons-594533 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [11d32dc02c2c323f85c2f8b3523227c87a0a2d7fc45b83ef57d59c98655430ff] <==
	* W0906 23:40:22.649305       1 handler_proxy.go:93] no RequestInfo found in the context
	E0906 23:40:22.649410       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0906 23:40:22.650872       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.121.43:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.121.43:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.121.43:443: connect: connection refused
	I0906 23:40:22.651384       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.110.121.43:443: connect: connection refused
	I0906 23:40:22.651430       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0906 23:40:22.654369       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.121.43:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.121.43:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.121.43:443: connect: connection refused
	E0906 23:40:22.681316       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.121.43:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.121.43:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.121.43:443: connect: connection refused
	I0906 23:40:22.780951       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0906 23:40:27.047688       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0906 23:41:13.942074       1 controller.go:159] removing "v1beta1.metrics.k8s.io" from AggregationController failed with: resource not found
	I0906 23:41:17.824151       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0906 23:41:18.059597       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.17.3"}
	I0906 23:41:19.994383       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0906 23:41:20.009386       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0906 23:41:20.048312       1 controller.go:159] removing "v1alpha1.gadget.kinvolk.io" from AggregationController failed with: resource not found
	E0906 23:41:20.048328       1 controller.go:159] removing "v1alpha1.gadget.kinvolk.io" from AggregationController failed with: resource not found
	W0906 23:41:21.032619       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0906 23:41:23.673493       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0906 23:41:23.673539       1 handler_proxy.go:93] no RequestInfo found in the context
	E0906 23:41:23.673569       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0906 23:41:23.673577       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 23:41:26.605464       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.45.12"}
	I0906 23:41:27.391507       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	* 
	* ==> kube-controller-manager [6c151abce636c9b3d55ded8300646a7e9f75c4899ca7b6834194bf68c90e1f04] <==
	* I0906 23:41:11.080490       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0906 23:41:11.082019       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0906 23:41:11.264500       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0906 23:41:13.747698       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-6dcc56475c" duration="4.539µs"
	I0906 23:41:13.971466       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="10.737µs"
	I0906 23:41:17.062213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5dcd45b5bf" duration="32.397141ms"
	I0906 23:41:17.062360       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5dcd45b5bf" duration="76.912µs"
	E0906 23:41:21.034697       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0906 23:41:21.978598       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 23:41:21.978668       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0906 23:41:24.332679       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 23:41:24.332967       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0906 23:41:26.632392       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-699c48fb74 to 1"
	I0906 23:41:26.644106       1 event.go:307] "Event occurred" object="headlamp/headlamp-699c48fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-699c48fb74-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	I0906 23:41:26.658956       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="24.336073ms"
	E0906 23:41:26.659283       1 replica_set.go:557] sync "headlamp/headlamp-699c48fb74" failed with pods "headlamp-699c48fb74-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0906 23:41:26.677280       1 event.go:307] "Event occurred" object="headlamp/headlamp-699c48fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-699c48fb74-nlnjs"
	I0906 23:41:26.699935       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="40.480104ms"
	I0906 23:41:26.712531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="12.388377ms"
	I0906 23:41:26.712981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="152.707µs"
	I0906 23:41:26.723868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="55.969µs"
	I0906 23:41:29.566955       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0906 23:41:30.104982       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	W0906 23:41:30.479083       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 23:41:30.479146       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [a61ae532d8087936bea7e43e0978fb3077452e7e7e82a534cd767307dd80a038] <==
	* I0906 23:39:45.106838       1 server_others.go:69] "Using iptables proxy"
	I0906 23:39:45.131366       1 node.go:141] Successfully retrieved node IP: 192.168.39.126
	I0906 23:39:45.225576       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0906 23:39:45.225623       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 23:39:45.230947       1 server_others.go:152] "Using iptables Proxier"
	I0906 23:39:45.231011       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0906 23:39:45.231469       1 server.go:846] "Version info" version="v1.28.1"
	I0906 23:39:45.231507       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 23:39:45.239970       1 config.go:97] "Starting endpoint slice config controller"
	I0906 23:39:45.240619       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0906 23:39:45.240529       1 config.go:188] "Starting service config controller"
	I0906 23:39:45.240646       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0906 23:39:45.241364       1 config.go:315] "Starting node config controller"
	I0906 23:39:45.241371       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0906 23:39:45.341215       1 shared_informer.go:318] Caches are synced for service config
	I0906 23:39:45.341281       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0906 23:39:45.341495       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [058cf62f54b6140088f509ec75d7699f2471b81916f76baa6749713947e2d378] <==
	* W0906 23:39:27.178340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 23:39:27.180987       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0906 23:39:28.041522       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 23:39:28.041868       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0906 23:39:28.047667       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 23:39:28.047834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0906 23:39:28.072440       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 23:39:28.072490       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0906 23:39:28.093881       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 23:39:28.093928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0906 23:39:28.119200       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 23:39:28.119449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0906 23:39:28.120833       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 23:39:28.121041       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0906 23:39:28.218976       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 23:39:28.219125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0906 23:39:28.286674       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 23:39:28.286792       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0906 23:39:28.472397       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 23:39:28.472462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0906 23:39:28.479323       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 23:39:28.479559       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0906 23:39:28.552277       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 23:39:28.552333       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0906 23:39:30.726949       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-09-06 23:38:54 UTC, ends at Wed 2023-09-06 23:41:30 UTC. --
	Sep 06 23:41:28 addons-594533 kubelet[1216]: I0906 23:41:28.436013    1216 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3fed2905-afe7-4780-b736-a55c335bdd45-gcp-creds\") on node \"addons-594533\" DevicePath \"\""
	Sep 06 23:41:28 addons-594533 kubelet[1216]: I0906 23:41:28.595117    1216 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3fed2905-afe7-4780-b736-a55c335bdd45" path="/var/lib/kubelet/pods/3fed2905-afe7-4780-b736-a55c335bdd45/volumes"
	Sep 06 23:41:29 addons-594533 kubelet[1216]: I0906 23:41:29.042609    1216 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rs4p\" (UniqueName: \"kubernetes.io/projected/7ecdae65-1d59-41a5-a998-19b134e95b2f-kube-api-access-5rs4p\") pod \"7ecdae65-1d59-41a5-a998-19b134e95b2f\" (UID: \"7ecdae65-1d59-41a5-a998-19b134e95b2f\") "
	Sep 06 23:41:29 addons-594533 kubelet[1216]: I0906 23:41:29.042654    1216 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7ecdae65-1d59-41a5-a998-19b134e95b2f-gcp-creds\") pod \"7ecdae65-1d59-41a5-a998-19b134e95b2f\" (UID: \"7ecdae65-1d59-41a5-a998-19b134e95b2f\") "
	Sep 06 23:41:29 addons-594533 kubelet[1216]: I0906 23:41:29.042869    1216 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^dc1b96e9-4d0e-11ee-88df-1668122136b9\") pod \"7ecdae65-1d59-41a5-a998-19b134e95b2f\" (UID: \"7ecdae65-1d59-41a5-a998-19b134e95b2f\") "
	Sep 06 23:41:29 addons-594533 kubelet[1216]: I0906 23:41:29.044062    1216 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ecdae65-1d59-41a5-a998-19b134e95b2f-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "7ecdae65-1d59-41a5-a998-19b134e95b2f" (UID: "7ecdae65-1d59-41a5-a998-19b134e95b2f"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 06 23:41:29 addons-594533 kubelet[1216]: I0906 23:41:29.047102    1216 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ecdae65-1d59-41a5-a998-19b134e95b2f-kube-api-access-5rs4p" (OuterVolumeSpecName: "kube-api-access-5rs4p") pod "7ecdae65-1d59-41a5-a998-19b134e95b2f" (UID: "7ecdae65-1d59-41a5-a998-19b134e95b2f"). InnerVolumeSpecName "kube-api-access-5rs4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 23:41:29 addons-594533 kubelet[1216]: I0906 23:41:29.060490    1216 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^dc1b96e9-4d0e-11ee-88df-1668122136b9" (OuterVolumeSpecName: "task-pv-storage") pod "7ecdae65-1d59-41a5-a998-19b134e95b2f" (UID: "7ecdae65-1d59-41a5-a998-19b134e95b2f"). InnerVolumeSpecName "pvc-49aa225f-158b-482f-8ebc-0ad077a27c95". PluginName "kubernetes.io/csi", VolumeGidValue ""
	Sep 06 23:41:29 addons-594533 kubelet[1216]: I0906 23:41:29.143702    1216 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5rs4p\" (UniqueName: \"kubernetes.io/projected/7ecdae65-1d59-41a5-a998-19b134e95b2f-kube-api-access-5rs4p\") on node \"addons-594533\" DevicePath \"\""
	Sep 06 23:41:29 addons-594533 kubelet[1216]: I0906 23:41:29.143855    1216 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7ecdae65-1d59-41a5-a998-19b134e95b2f-gcp-creds\") on node \"addons-594533\" DevicePath \"\""
	Sep 06 23:41:29 addons-594533 kubelet[1216]: I0906 23:41:29.143908    1216 reconciler_common.go:293] "operationExecutor.UnmountDevice started for volume \"pvc-49aa225f-158b-482f-8ebc-0ad077a27c95\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^dc1b96e9-4d0e-11ee-88df-1668122136b9\") on node \"addons-594533\" "
	Sep 06 23:41:29 addons-594533 kubelet[1216]: I0906 23:41:29.160570    1216 operation_generator.go:992] UnmountDevice succeeded for volume "pvc-49aa225f-158b-482f-8ebc-0ad077a27c95" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^dc1b96e9-4d0e-11ee-88df-1668122136b9") on node "addons-594533"
	Sep 06 23:41:29 addons-594533 kubelet[1216]: I0906 23:41:29.162905    1216 scope.go:117] "RemoveContainer" containerID="b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207"
	Sep 06 23:41:29 addons-594533 kubelet[1216]: I0906 23:41:29.245506    1216 reconciler_common.go:300] "Volume detached for volume \"pvc-49aa225f-158b-482f-8ebc-0ad077a27c95\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^dc1b96e9-4d0e-11ee-88df-1668122136b9\") on node \"addons-594533\" DevicePath \"\""
	Sep 06 23:41:29 addons-594533 kubelet[1216]: I0906 23:41:29.315286    1216 scope.go:117] "RemoveContainer" containerID="b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207"
	Sep 06 23:41:29 addons-594533 kubelet[1216]: E0906 23:41:29.321102    1216 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207\": not found" containerID="b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207"
	Sep 06 23:41:29 addons-594533 kubelet[1216]: I0906 23:41:29.322001    1216 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207"} err="failed to get container status \"b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207\": rpc error: code = NotFound desc = an error occurred when try to find container \"b575e05c104ede0837594807b52f2388530355c4bc92a175d2aec4d7468ff207\": not found"
	Sep 06 23:41:29 addons-594533 kubelet[1216]: I0906 23:41:29.322030    1216 scope.go:117] "RemoveContainer" containerID="3cabc6d3c238493535ce6e268a8dd3586d2ee83bae1dff17cb93e3e17a76535d"
	Sep 06 23:41:30 addons-594533 kubelet[1216]: I0906 23:41:30.558958    1216 scope.go:117] "RemoveContainer" containerID="415cdbe0455071bc4a45fdb9c29ce343536b1407158948635658e8096d280a6c"
	Sep 06 23:41:30 addons-594533 kubelet[1216]: I0906 23:41:30.568364    1216 scope.go:117] "RemoveContainer" containerID="7e2c44c6e0922219413c40a2db7dc773bc3ccc6199f3ce12d58962a3b85a619b"
	Sep 06 23:41:30 addons-594533 kubelet[1216]: I0906 23:41:30.598356    1216 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7ecdae65-1d59-41a5-a998-19b134e95b2f" path="/var/lib/kubelet/pods/7ecdae65-1d59-41a5-a998-19b134e95b2f/volumes"
	Sep 06 23:41:30 addons-594533 kubelet[1216]: E0906 23:41:30.657670    1216 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 06 23:41:30 addons-594533 kubelet[1216]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 23:41:30 addons-594533 kubelet[1216]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 23:41:30 addons-594533 kubelet[1216]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [706f82c1f995b31c35fa0b7541744315b7a27c49e9bcd60c4ef51d8fd1e2331b] <==
	* I0906 23:39:55.399582       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 23:39:55.426189       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 23:39:55.426222       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 23:39:55.459141       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 23:39:55.459263       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-594533_4580f15a-c57d-4b29-9bef-0a476a50f35d!
	I0906 23:39:55.468653       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bcf43b54-6da4-42ad-893b-62884bf79096", APIVersion:"v1", ResourceVersion:"745", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-594533_4580f15a-c57d-4b29-9bef-0a476a50f35d became leader
	I0906 23:39:55.563157       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-594533_4580f15a-c57d-4b29-9bef-0a476a50f35d!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-594533 -n addons-594533
helpers_test.go:261: (dbg) Run:  kubectl --context addons-594533 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: headlamp-699c48fb74-nlnjs ingress-nginx-admission-create-c5b27 ingress-nginx-admission-patch-grnbn
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-594533 describe pod headlamp-699c48fb74-nlnjs ingress-nginx-admission-create-c5b27 ingress-nginx-admission-patch-grnbn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-594533 describe pod headlamp-699c48fb74-nlnjs ingress-nginx-admission-create-c5b27 ingress-nginx-admission-patch-grnbn: exit status 1 (117.927451ms)

** stderr ** 
	Error from server (NotFound): pods "headlamp-699c48fb74-nlnjs" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-c5b27" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-grnbn" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-594533 describe pod headlamp-699c48fb74-nlnjs ingress-nginx-admission-create-c5b27 ingress-nginx-admission-patch-grnbn: exit status 1
--- FAIL: TestAddons/parallel/Registry (23.38s)

Test pass (265/302)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 47.76
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.05
10 TestDownloadOnly/v1.28.1/json-events 14.88
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.05
16 TestDownloadOnly/DeleteAll 0.12
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.11
19 TestBinaryMirror 0.52
20 TestOffline 134.54
22 TestAddons/Setup 146.28
25 TestAddons/parallel/Ingress 28.22
26 TestAddons/parallel/InspektorGadget 10.99
27 TestAddons/parallel/MetricsServer 6.03
28 TestAddons/parallel/HelmTiller 11.54
30 TestAddons/parallel/CSI 53.27
31 TestAddons/parallel/Headlamp 14.5
32 TestAddons/parallel/CloudSpanner 5.7
35 TestAddons/serial/GCPAuth/Namespaces 0.16
36 TestAddons/StoppedEnableDisable 92.68
37 TestCertOptions 76.84
38 TestCertExpiration 270.9
40 TestForceSystemdFlag 50.94
41 TestForceSystemdEnv 74.82
43 TestKVMDriverInstallOrUpdate 3.92
47 TestErrorSpam/setup 52.54
48 TestErrorSpam/start 0.31
49 TestErrorSpam/status 0.7
50 TestErrorSpam/pause 1.35
51 TestErrorSpam/unpause 1.5
52 TestErrorSpam/stop 1.43
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 61.45
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 5.34
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.57
64 TestFunctional/serial/CacheCmd/cache/add_local 2.51
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.88
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
72 TestFunctional/serial/ExtraConfig 38.5
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 1.21
75 TestFunctional/serial/LogsFileCmd 1.18
76 TestFunctional/serial/InvalidService 4.4
78 TestFunctional/parallel/ConfigCmd 0.29
79 TestFunctional/parallel/DashboardCmd 29.66
80 TestFunctional/parallel/DryRun 0.25
81 TestFunctional/parallel/InternationalLanguage 0.13
82 TestFunctional/parallel/StatusCmd 0.95
86 TestFunctional/parallel/ServiceCmdConnect 7.48
87 TestFunctional/parallel/AddonsCmd 0.1
88 TestFunctional/parallel/PersistentVolumeClaim 48.92
90 TestFunctional/parallel/SSHCmd 0.36
91 TestFunctional/parallel/CpCmd 0.85
92 TestFunctional/parallel/MySQL 26.02
93 TestFunctional/parallel/FileSync 0.23
94 TestFunctional/parallel/CertSync 1.29
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
102 TestFunctional/parallel/License 0.6
103 TestFunctional/parallel/ServiceCmd/DeployApp 10.22
104 TestFunctional/parallel/Version/short 0.04
105 TestFunctional/parallel/Version/components 0.86
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
110 TestFunctional/parallel/ImageCommands/ImageBuild 5.27
111 TestFunctional/parallel/ImageCommands/Setup 2.1
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
113 TestFunctional/parallel/ProfileCmd/profile_list 0.29
114 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
115 TestFunctional/parallel/MountCmd/any-port 9.73
116 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.96
117 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.4
118 TestFunctional/parallel/ServiceCmd/List 0.27
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
121 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.98
122 TestFunctional/parallel/ServiceCmd/Format 0.27
123 TestFunctional/parallel/ServiceCmd/URL 0.28
124 TestFunctional/parallel/MountCmd/specific-port 1.96
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.61
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.1
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.86
142 TestFunctional/delete_addon-resizer_images 0.06
143 TestFunctional/delete_my-image_image 0.01
144 TestFunctional/delete_minikube_cached_images 0.01
148 TestIngressAddonLegacy/StartLegacyK8sCluster 86.76
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.44
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.59
152 TestIngressAddonLegacy/serial/ValidateIngressAddons 34.87
155 TestJSONOutput/start/Command 63.19
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.59
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.57
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 7.08
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.17
183 TestMainNoArgs 0.04
184 TestMinikubeProfile 100.02
187 TestMountStart/serial/StartWithMountFirst 28.87
188 TestMountStart/serial/VerifyMountFirst 0.36
189 TestMountStart/serial/StartWithMountSecond 26.43
190 TestMountStart/serial/VerifyMountSecond 0.36
191 TestMountStart/serial/DeleteFirst 0.84
192 TestMountStart/serial/VerifyMountPostDelete 0.37
193 TestMountStart/serial/Stop 1.17
194 TestMountStart/serial/RestartStopped 24.31
195 TestMountStart/serial/VerifyMountPostStop 0.38
198 TestMultiNode/serial/FreshStart2Nodes 128.62
199 TestMultiNode/serial/DeployApp2Nodes 5.29
200 TestMultiNode/serial/PingHostFrom2Pods 0.79
201 TestMultiNode/serial/AddNode 42.18
202 TestMultiNode/serial/ProfileList 0.2
203 TestMultiNode/serial/CopyFile 7.04
204 TestMultiNode/serial/StopNode 2.13
205 TestMultiNode/serial/StartAfterStop 27.48
206 TestMultiNode/serial/RestartKeepsNodes 312.66
207 TestMultiNode/serial/DeleteNode 1.67
208 TestMultiNode/serial/StopMultiNode 183.62
209 TestMultiNode/serial/RestartMultiNode 92.33
210 TestMultiNode/serial/ValidateNameConflict 50.23
215 TestPreload 276.32
217 TestScheduledStopUnix 119.58
221 TestRunningBinaryUpgrade 232.13
223 TestKubernetesUpgrade 176.6
226 TestStoppedBinaryUpgrade/Setup 2.47
233 TestStoppedBinaryUpgrade/Upgrade 217.24
235 TestPause/serial/Start 67.98
237 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
238 TestNoKubernetes/serial/StartWithK8s 69.81
239 TestPause/serial/SecondStartNoReconfiguration 38.71
240 TestStoppedBinaryUpgrade/MinikubeLogs 1.43
248 TestNetworkPlugins/group/false 4.43
252 TestPause/serial/Pause 0.81
253 TestPause/serial/VerifyStatus 0.28
254 TestPause/serial/Unpause 0.71
255 TestNoKubernetes/serial/StartWithStopK8s 59.66
256 TestPause/serial/PauseAgain 0.72
257 TestPause/serial/DeletePaused 0.97
258 TestPause/serial/VerifyDeletedResources 0.23
259 TestNoKubernetes/serial/Start 57.01
261 TestStartStop/group/old-k8s-version/serial/FirstStart 395.21
263 TestStartStop/group/no-preload/serial/FirstStart 143.15
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
265 TestNoKubernetes/serial/ProfileList 0.7
266 TestNoKubernetes/serial/Stop 1.16
267 TestNoKubernetes/serial/StartNoArgs 70.94
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
270 TestStartStop/group/embed-certs/serial/FirstStart 103.3
271 TestStartStop/group/no-preload/serial/DeployApp 9.48
272 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.27
273 TestStartStop/group/no-preload/serial/Stop 92.05
275 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.4
276 TestStartStop/group/embed-certs/serial/DeployApp 10.49
277 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.09
278 TestStartStop/group/embed-certs/serial/Stop 92.17
279 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.41
280 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
281 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.79
282 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
283 TestStartStop/group/no-preload/serial/SecondStart 307.42
284 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
285 TestStartStop/group/embed-certs/serial/SecondStart 306.74
286 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
287 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 330.21
288 TestStartStop/group/old-k8s-version/serial/DeployApp 11.45
289 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.8
290 TestStartStop/group/old-k8s-version/serial/Stop 91.89
291 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
292 TestStartStop/group/old-k8s-version/serial/SecondStart 457.67
293 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
294 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
295 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
296 TestStartStop/group/no-preload/serial/Pause 2.57
298 TestStartStop/group/newest-cni/serial/FirstStart 63.04
299 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
300 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
301 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
302 TestStartStop/group/embed-certs/serial/Pause 2.57
303 TestNetworkPlugins/group/auto/Start 101.39
304 TestStartStop/group/newest-cni/serial/DeployApp 0
305 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.45
306 TestStartStop/group/newest-cni/serial/Stop 2.23
307 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
308 TestStartStop/group/newest-cni/serial/SecondStart 50.46
309 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 16.02
310 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
311 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
312 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.65
313 TestNetworkPlugins/group/kindnet/Start 76.99
314 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
315 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
316 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
317 TestStartStop/group/newest-cni/serial/Pause 2.4
318 TestNetworkPlugins/group/enable-default-cni/Start 127.99
319 TestNetworkPlugins/group/auto/KubeletFlags 0.19
320 TestNetworkPlugins/group/auto/NetCatPod 9.39
321 TestNetworkPlugins/group/auto/DNS 0.2
322 TestNetworkPlugins/group/auto/Localhost 0.53
323 TestNetworkPlugins/group/auto/HairPin 0.15
324 TestNetworkPlugins/group/flannel/Start 92.55
325 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
326 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
327 TestNetworkPlugins/group/kindnet/NetCatPod 9.38
328 TestNetworkPlugins/group/kindnet/DNS 0.23
329 TestNetworkPlugins/group/kindnet/Localhost 0.22
330 TestNetworkPlugins/group/kindnet/HairPin 0.18
331 TestNetworkPlugins/group/calico/Start 99.78
332 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
333 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.38
334 TestNetworkPlugins/group/flannel/ControllerPod 5.03
335 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
336 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
337 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
338 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
339 TestNetworkPlugins/group/flannel/NetCatPod 11.48
340 TestNetworkPlugins/group/flannel/DNS 0.21
341 TestNetworkPlugins/group/flannel/Localhost 0.17
342 TestNetworkPlugins/group/flannel/HairPin 0.17
343 TestNetworkPlugins/group/bridge/Start 106.55
344 TestNetworkPlugins/group/custom-flannel/Start 102.24
345 TestNetworkPlugins/group/calico/ControllerPod 5.03
346 TestNetworkPlugins/group/calico/KubeletFlags 0.21
347 TestNetworkPlugins/group/calico/NetCatPod 10.43
348 TestNetworkPlugins/group/calico/DNS 0.17
349 TestNetworkPlugins/group/calico/Localhost 0.16
350 TestNetworkPlugins/group/calico/HairPin 0.18
351 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
352 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
353 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
354 TestStartStop/group/old-k8s-version/serial/Pause 2.92
355 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
356 TestNetworkPlugins/group/bridge/NetCatPod 10.3
357 TestNetworkPlugins/group/bridge/DNS 0.2
358 TestNetworkPlugins/group/bridge/Localhost 0.14
359 TestNetworkPlugins/group/bridge/HairPin 0.14
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.38
362 TestNetworkPlugins/group/custom-flannel/DNS 0.17
363 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
364 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
TestDownloadOnly/v1.16.0/json-events (47.76s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-783127 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-783127 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (47.758844754s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (47.76s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-783127
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-783127: exit status 85 (54.153405ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-783127 | jenkins | v1.31.2 | 06 Sep 23 23:37 UTC |          |
	|         | -p download-only-783127        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 23:37:38
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 23:37:38.341608   13716 out.go:296] Setting OutFile to fd 1 ...
	I0906 23:37:38.341736   13716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:37:38.341748   13716 out.go:309] Setting ErrFile to fd 2...
	I0906 23:37:38.341755   13716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:37:38.341959   13716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6521/.minikube/bin
	W0906 23:37:38.342092   13716 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17174-6521/.minikube/config/config.json: open /home/jenkins/minikube-integration/17174-6521/.minikube/config/config.json: no such file or directory
	I0906 23:37:38.342631   13716 out.go:303] Setting JSON to true
	I0906 23:37:38.343393   13716 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1205,"bootTime":1694042254,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:37:38.343440   13716 start.go:138] virtualization: kvm guest
	I0906 23:37:38.345760   13716 out.go:97] [download-only-783127] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0906 23:37:38.347263   13716 out.go:169] MINIKUBE_LOCATION=17174
	W0906 23:37:38.345861   13716 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17174-6521/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 23:37:38.345911   13716 notify.go:220] Checking for updates...
	I0906 23:37:38.349711   13716 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:37:38.350834   13716 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17174-6521/kubeconfig
	I0906 23:37:38.351952   13716 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6521/.minikube
	I0906 23:37:38.353221   13716 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0906 23:37:38.355853   13716 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 23:37:38.356097   13716 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 23:37:38.469953   13716 out.go:97] Using the kvm2 driver based on user configuration
	I0906 23:37:38.470014   13716 start.go:298] selected driver: kvm2
	I0906 23:37:38.470031   13716 start.go:902] validating driver "kvm2" against <nil>
	I0906 23:37:38.470320   13716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:37:38.470415   13716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6521/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 23:37:38.483491   13716 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0906 23:37:38.483551   13716 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 23:37:38.484049   13716 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0906 23:37:38.484199   13716 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 23:37:38.484226   13716 cni.go:84] Creating CNI manager for ""
	I0906 23:37:38.484236   13716 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0906 23:37:38.484243   13716 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 23:37:38.484255   13716 start_flags.go:321] config:
	{Name:download-only-783127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-783127 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:37:38.484428   13716 iso.go:125] acquiring lock: {Name:mk888fe4d8846e15e5fb0d4239da695971e7f3d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:37:38.486327   13716 out.go:97] Downloading VM boot image ...
	I0906 23:37:38.486363   13716 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17174-6521/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0906 23:37:47.770522   13716 out.go:97] Starting control plane node download-only-783127 in cluster download-only-783127
	I0906 23:37:47.770542   13716 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0906 23:37:47.880567   13716 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0906 23:37:47.880595   13716 cache.go:57] Caching tarball of preloaded images
	I0906 23:37:47.880739   13716 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0906 23:37:47.882507   13716 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0906 23:37:47.882533   13716 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0906 23:37:47.997984   13716 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/17174-6521/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0906 23:38:00.378734   13716 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0906 23:38:00.378831   13716 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17174-6521/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0906 23:38:01.232446   13716 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0906 23:38:01.232799   13716 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/download-only-783127/config.json ...
	I0906 23:38:01.232831   13716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/download-only-783127/config.json: {Name:mk74b8f4c7d6159f871a2e1ae8e14be1a0657129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:38:01.232997   13716 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0906 23:38:01.233169   13716 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17174-6521/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-783127"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

TestDownloadOnly/v1.28.1/json-events (14.88s)

=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-783127 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-783127 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (14.878398338s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (14.88s)

TestDownloadOnly/v1.28.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

TestDownloadOnly/v1.28.1/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-783127
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-783127: exit status 85 (51.957659ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-783127 | jenkins | v1.31.2 | 06 Sep 23 23:37 UTC |          |
	|         | -p download-only-783127        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-783127 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC |          |
	|         | -p download-only-783127        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 23:38:26
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 23:38:26.156139   13868 out.go:296] Setting OutFile to fd 1 ...
	I0906 23:38:26.156237   13868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:38:26.156245   13868 out.go:309] Setting ErrFile to fd 2...
	I0906 23:38:26.156249   13868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:38:26.156441   13868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6521/.minikube/bin
	W0906 23:38:26.156544   13868 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17174-6521/.minikube/config/config.json: open /home/jenkins/minikube-integration/17174-6521/.minikube/config/config.json: no such file or directory
	I0906 23:38:26.156927   13868 out.go:303] Setting JSON to true
	I0906 23:38:26.157631   13868 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1252,"bootTime":1694042254,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:38:26.157684   13868 start.go:138] virtualization: kvm guest
	I0906 23:38:26.159737   13868 out.go:97] [download-only-783127] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0906 23:38:26.161175   13868 out.go:169] MINIKUBE_LOCATION=17174
	I0906 23:38:26.159864   13868 notify.go:220] Checking for updates...
	I0906 23:38:26.163659   13868 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:38:26.164870   13868 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17174-6521/kubeconfig
	I0906 23:38:26.166160   13868 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6521/.minikube
	I0906 23:38:26.167390   13868 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0906 23:38:26.170562   13868 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 23:38:26.170954   13868 config.go:182] Loaded profile config "download-only-783127": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0906 23:38:26.170999   13868 start.go:810] api.Load failed for download-only-783127: filestore "download-only-783127": Docker machine "download-only-783127" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0906 23:38:26.171087   13868 driver.go:373] Setting default libvirt URI to qemu:///system
	W0906 23:38:26.171132   13868 start.go:810] api.Load failed for download-only-783127: filestore "download-only-783127": Docker machine "download-only-783127" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0906 23:38:26.201751   13868 out.go:97] Using the kvm2 driver based on existing profile
	I0906 23:38:26.201774   13868 start.go:298] selected driver: kvm2
	I0906 23:38:26.201785   13868 start.go:902] validating driver "kvm2" against &{Name:download-only-783127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-783127 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:38:26.202171   13868 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:38:26.202241   13868 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17174-6521/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 23:38:26.215812   13868 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0906 23:38:26.216487   13868 cni.go:84] Creating CNI manager for ""
	I0906 23:38:26.216506   13868 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0906 23:38:26.216519   13868 start_flags.go:321] config:
	{Name:download-only-783127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-783127 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:38:26.216654   13868 iso.go:125] acquiring lock: {Name:mk888fe4d8846e15e5fb0d4239da695971e7f3d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:38:26.218150   13868 out.go:97] Starting control plane node download-only-783127 in cluster download-only-783127
	I0906 23:38:26.218166   13868 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0906 23:38:26.323980   13868 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4
	I0906 23:38:26.324000   13868 cache.go:57] Caching tarball of preloaded images
	I0906 23:38:26.324207   13868 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0906 23:38:26.325997   13868 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0906 23:38:26.326013   13868 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4 ...
	I0906 23:38:26.442085   13868 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:923ead224190762fa2aa551036672b63 -> /home/jenkins/minikube-integration/17174-6521/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-783127"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.05s)
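The preload fetch above pins an md5 digest in the URL's `?checksum=md5:...` fragment, which the downloader compares against the digest of the file it received before caching it. A minimal sketch of that verification step, using a local stand-in file instead of the real tarball (the path and contents below are illustrative):

```shell
# Stand-in for the downloaded preload tarball (hypothetical contents).
printf 'preload-tarball-contents' > /tmp/preload.tar.lz4

# The expected digest would normally come from the ?checksum=md5:... fragment;
# here it is computed from the same file so the comparison succeeds.
expected=$(md5sum /tmp/preload.tar.lz4 | cut -d' ' -f1)
actual=$(md5sum /tmp/preload.tar.lz4 | cut -d' ' -f1)

if [ "$expected" = "$actual" ]; then
  echo "checksum ok"
else
  echo "checksum mismatch" >&2
  exit 1
fi
```

On mismatch the cached copy would be discarded and the download retried.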

TestDownloadOnly/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.12s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-783127
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.52s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-706469 --alsologtostderr --binary-mirror http://127.0.0.1:38425 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-706469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-706469
--- PASS: TestBinaryMirror (0.52s)

TestOffline (134.54s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-139534 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-139534 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m13.498253882s)
helpers_test.go:175: Cleaning up "offline-containerd-139534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-139534
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-139534: (1.044032384s)
--- PASS: TestOffline (134.54s)

TestAddons/Setup (146.28s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-594533 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-594533 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m26.283905959s)
--- PASS: TestAddons/Setup (146.28s)

TestAddons/parallel/Ingress (28.22s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-594533 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context addons-594533 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (3.128769624s)
addons_test.go:208: (dbg) Run:  kubectl --context addons-594533 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-594533 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4acaf8ad-44f8-4134-8844-cd3d45368625] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4acaf8ad-44f8-4134-8844-cd3d45368625] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.022637668s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-594533 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-594533 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-594533 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.126
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-594533 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-594533 addons disable ingress-dns --alsologtostderr -v=1: (2.388135061s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-594533 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-594533 addons disable ingress --alsologtostderr -v=1: (7.770425954s)
--- PASS: TestAddons/parallel/Ingress (28.22s)

TestAddons/parallel/InspektorGadget (10.99s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4z76j" [5912a0f0-e51f-4ab5-97b2-f297d75b384a] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013380848s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-594533
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-594533: (5.973865222s)
--- PASS: TestAddons/parallel/InspektorGadget (10.99s)

TestAddons/parallel/MetricsServer (6.03s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 25.442601ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-74zf4" [5fe11905-0b57-4b88-8c59-c997401fadee] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.026236542s
addons_test.go:391: (dbg) Run:  kubectl --context addons-594533 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-594533 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.03s)

TestAddons/parallel/HelmTiller (11.54s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 11.475426ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-b2jtq" [2dc7a85e-be48-41ae-a2b0-7fc4b48cdf5c] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.032018246s
addons_test.go:449: (dbg) Run:  kubectl --context addons-594533 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-594533 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.835579416s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-594533 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.54s)

TestAddons/parallel/CSI (53.27s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 26.50927ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-594533 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-594533 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7ecdae65-1d59-41a5-a998-19b134e95b2f] Pending
helpers_test.go:344: "task-pv-pod" [7ecdae65-1d59-41a5-a998-19b134e95b2f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7ecdae65-1d59-41a5-a998-19b134e95b2f] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.024219076s
addons_test.go:560: (dbg) Run:  kubectl --context addons-594533 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-594533 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-594533 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-594533 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-594533 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-594533 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-594533 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-594533 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-594533 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [725752a6-6f0f-403f-9c3d-a98ee8a223e5] Pending
helpers_test.go:344: "task-pv-pod-restore" [725752a6-6f0f-403f-9c3d-a98ee8a223e5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [725752a6-6f0f-403f-9c3d-a98ee8a223e5] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.022400907s
addons_test.go:602: (dbg) Run:  kubectl --context addons-594533 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-594533 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-594533 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-594533 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-594533 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.726208892s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-594533 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.27s)

TestAddons/parallel/Headlamp (14.50s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-594533 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-594533 --alsologtostderr -v=1: (1.462576397s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-nlnjs" [572632e4-7104-432a-9fb6-f89e0ea69309] Pending
helpers_test.go:344: "headlamp-699c48fb74-nlnjs" [572632e4-7104-432a-9fb6-f89e0ea69309] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-nlnjs" [572632e4-7104-432a-9fb6-f89e0ea69309] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.034363249s
--- PASS: TestAddons/parallel/Headlamp (14.50s)

TestAddons/parallel/CloudSpanner (5.70s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dcc56475c-jjbpg" [210ba773-d722-4b20-98b1-f2c7da8b0fef] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.016892356s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-594533
--- PASS: TestAddons/parallel/CloudSpanner (5.70s)

TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-594533 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-594533 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

TestAddons/StoppedEnableDisable (92.68s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-594533
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-594533: (1m32.431575028s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-594533
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-594533
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-594533
--- PASS: TestAddons/StoppedEnableDisable (92.68s)

TestCertOptions (76.84s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-095701 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-095701 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m15.374361675s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-095701 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-095701 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-095701 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-095701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-095701
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-095701: (1.027859879s)
--- PASS: TestCertOptions (76.84s)

TestCertExpiration (270.90s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-099160 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
E0907 00:19:24.363881   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-099160 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m24.4849775s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-099160 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-099160 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (5.412756034s)
helpers_test.go:175: Cleaning up "cert-expiration-099160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-099160
--- PASS: TestCertExpiration (270.90s)
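The expiration behavior exercised above (`--cert-expiration=3m`, then `8760h`) ultimately comes down to the notAfter field of the generated certificates. A minimal local sketch, using a throwaway 1-day cert rather than minikube's real ones:

```shell
# Throwaway cert standing in for minikube's apiserver cert; paths arbitrary.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikube" \
  -keyout /tmp/exp.key -out /tmp/exp.crt 2>/dev/null
openssl x509 -enddate -noout -in /tmp/exp.crt      # prints notAfter=...
# openssl can also answer "does it survive N more seconds" directly:
openssl x509 -checkend 3600 -in /tmp/exp.crt \
  && echo "still valid in 1h" || echo "expires within 1h"
```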

                                                
                                    
TestForceSystemdFlag (50.94s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-847209 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-847209 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (49.721667926s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-847209 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-847209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-847209
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-847209: (1.007957819s)
--- PASS: TestForceSystemdFlag (50.94s)

                                                
                                    
TestForceSystemdEnv (74.82s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-451717 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0907 00:19:11.254932   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-451717 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m13.671841815s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-451717 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-451717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-451717
--- PASS: TestForceSystemdEnv (74.82s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.92s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.92s)

                                                
                                    
TestErrorSpam/setup (52.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-139077 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-139077 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-139077 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-139077 --driver=kvm2  --container-runtime=containerd: (52.535247534s)
--- PASS: TestErrorSpam/setup (52.54s)

                                                
                                    
TestErrorSpam/start (0.31s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 start --dry-run
--- PASS: TestErrorSpam/start (0.31s)

                                                
                                    
TestErrorSpam/status (0.7s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
TestErrorSpam/pause (1.35s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 pause
--- PASS: TestErrorSpam/pause (1.35s)

                                                
                                    
TestErrorSpam/unpause (1.5s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 unpause
--- PASS: TestErrorSpam/unpause (1.50s)

                                                
                                    
TestErrorSpam/stop (1.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 stop: (1.301978718s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-139077 --log_dir /tmp/nospam-139077 stop
--- PASS: TestErrorSpam/stop (1.43s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17174-6521/.minikube/files/etc/test/nested/copy/13704/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (61.45s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-369762 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-369762 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m1.445474769s)
--- PASS: TestFunctional/serial/StartWithProxy (61.45s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.34s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-369762 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-369762 --alsologtostderr -v=8: (5.336481495s)
functional_test.go:659: soft start took 5.337045569s for "functional-369762" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.34s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-369762 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-369762 cache add registry.k8s.io/pause:3.1: (1.103252551s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-369762 cache add registry.k8s.io/pause:3.3: (1.217193902s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-369762 cache add registry.k8s.io/pause:latest: (1.247402954s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-369762 /tmp/TestFunctionalserialCacheCmdcacheadd_local464792304/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 cache add minikube-local-cache-test:functional-369762
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-369762 cache add minikube-local-cache-test:functional-369762: (2.203940186s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 cache delete minikube-local-cache-test:functional-369762
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-369762
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.51s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369762 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (197.605653ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-369762 cache reload: (1.256880738s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 kubectl -- --context functional-369762 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-369762 get pods
E0906 23:46:08.208210   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
E0906 23:46:08.214029   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
E0906 23:46:08.224426   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
E0906 23:46:08.244711   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.5s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-369762 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0906 23:46:08.284873   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
E0906 23:46:08.365643   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
E0906 23:46:08.525841   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
E0906 23:46:08.846416   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
E0906 23:46:09.487357   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
E0906 23:46:10.767841   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
E0906 23:46:13.328926   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
E0906 23:46:18.449602   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
E0906 23:46:28.690135   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-369762 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.5018571s)
functional_test.go:757: restart took 38.501949675s for "functional-369762" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.50s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-369762 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
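The health check above boils down to reading each control-plane pod's phase and Ready condition out of the JSON that `kubectl get po -o=json` returns. A minimal offline sketch (canned single-pod JSON instead of a live cluster; the field layout follows the Kubernetes Pod API):

```shell
# Canned JSON standing in for `kubectl get po -l tier=control-plane -o=json`.
cat > /tmp/pods.json <<'EOF'
{"items": [{"metadata": {"labels": {"component": "etcd"}},
            "status": {"phase": "Running",
                       "conditions": [{"type": "Ready", "status": "True"}]}}]}
EOF
python3 - <<'EOF'
import json
for pod in json.load(open("/tmp/pods.json"))["items"]:
    name = pod["metadata"]["labels"]["component"]
    status = pod["status"]
    ready = any(c["type"] == "Ready" and c["status"] == "True"
                for c in status["conditions"])
    print(f'{name} phase: {status["phase"]}')
    print(f'{name} status: {"Ready" if ready else "NotReady"}')
EOF
```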

                                                
                                    
TestFunctional/serial/LogsCmd (1.21s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-369762 logs: (1.213896767s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.18s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 logs --file /tmp/TestFunctionalserialLogsFileCmd2919209355/001/logs.txt
E0906 23:46:49.170337   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-369762 logs --file /tmp/TestFunctionalserialLogsFileCmd2919209355/001/logs.txt: (1.176323393s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.18s)

                                                
                                    
TestFunctional/serial/InvalidService (4.4s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-369762 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-369762
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-369762: exit status 115 (272.07334ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.139:32276 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-369762 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.40s)
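For context, testdata/invalidsvc.yaml points a Service at pods that never run, which is what makes `minikube service` exit with SVC_UNREACHABLE above. The manifest below is a hypothetical reconstruction in that spirit, not the actual test fixture:

```shell
# Hypothetical stand-in for testdata/invalidsvc.yaml: a NodePort Service whose
# selector matches no pod, so `minikube service` finds no running backend.
cat > /tmp/invalid-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist
  ports:
    - port: 80
      targetPort: 80
EOF
# On a live cluster one would then mirror the test:
#   kubectl apply -f /tmp/invalid-svc.yaml
#   minikube service invalid-svc     # exits 115 with SVC_UNREACHABLE
#   kubectl delete -f /tmp/invalid-svc.yaml
grep -c "invalid-svc" /tmp/invalid-svc.yaml
```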

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.29s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369762 config get cpus: exit status 14 (48.996324ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369762 config get cpus: exit status 14 (38.084274ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (29.66s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-369762 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-369762 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20595: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (29.66s)

                                                
                                    
TestFunctional/parallel/DryRun (0.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-369762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-369762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (122.467444ms)

                                                
                                                
-- stdout --
	* [functional-369762] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17174
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17174-6521/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6521/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 23:47:10.227111   20297 out.go:296] Setting OutFile to fd 1 ...
	I0906 23:47:10.227263   20297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:47:10.227273   20297 out.go:309] Setting ErrFile to fd 2...
	I0906 23:47:10.227280   20297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:47:10.227477   20297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6521/.minikube/bin
	I0906 23:47:10.228008   20297 out.go:303] Setting JSON to false
	I0906 23:47:10.228956   20297 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1777,"bootTime":1694042254,"procs":275,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:47:10.229019   20297 start.go:138] virtualization: kvm guest
	I0906 23:47:10.231160   20297 out.go:177] * [functional-369762] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0906 23:47:10.232584   20297 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 23:47:10.233940   20297 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:47:10.232653   20297 notify.go:220] Checking for updates...
	I0906 23:47:10.236320   20297 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6521/kubeconfig
	I0906 23:47:10.237647   20297 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6521/.minikube
	I0906 23:47:10.238878   20297 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 23:47:10.240130   20297 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 23:47:10.242615   20297 config.go:182] Loaded profile config "functional-369762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0906 23:47:10.243600   20297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:47:10.243712   20297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:10.257995   20297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44313
	I0906 23:47:10.258459   20297 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:10.259037   20297 main.go:141] libmachine: Using API Version  1
	I0906 23:47:10.259066   20297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:10.259448   20297 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:10.259654   20297 main.go:141] libmachine: (functional-369762) Calling .DriverName
	I0906 23:47:10.259923   20297 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 23:47:10.260383   20297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:47:10.260434   20297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:10.274252   20297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43875
	I0906 23:47:10.274559   20297 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:10.275024   20297 main.go:141] libmachine: Using API Version  1
	I0906 23:47:10.275048   20297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:10.275307   20297 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:10.275475   20297 main.go:141] libmachine: (functional-369762) Calling .DriverName
	I0906 23:47:10.305692   20297 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 23:47:10.307075   20297 start.go:298] selected driver: kvm2
	I0906 23:47:10.307090   20297 start.go:902] validating driver "kvm2" against &{Name:functional-369762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:functional-369762 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.139 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:47:10.307216   20297 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 23:47:10.309481   20297 out.go:177] 
	W0906 23:47:10.310818   20297 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 23:47:10.312060   20297 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-369762 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.25s)
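The dry-run exit above comes down to a single threshold comparison. A minimal sketch of that check (hypothetical names; not minikube's actual implementation):

```python
# Hypothetical sketch of the check behind RSRC_INSUFFICIENT_REQ_MEMORY:
# a request below the usable minimum aborts before any VM work starts.
MINIMUM_USABLE_MB = 1800  # the minimum cited in the log above

def validate_memory(requested_mb):
    """Return an error string when the request is below the minimum, else None."""
    if requested_mb < MINIMUM_USABLE_MB:
        return ("RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation "
                "%dMiB is less than the usable minimum of %dMB"
                % (requested_mb, MINIMUM_USABLE_MB))
    return None

print(validate_memory(250))   # the failing 250MB dry run above
print(validate_memory(4000))  # the profile's configured 4000MB → None
```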

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-369762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-369762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (127.384991ms)

-- stdout --
	* [functional-369762] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17174
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17174-6521/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6521/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0906 23:47:10.478113   20351 out.go:296] Setting OutFile to fd 1 ...
	I0906 23:47:10.478246   20351 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:47:10.478256   20351 out.go:309] Setting ErrFile to fd 2...
	I0906 23:47:10.478263   20351 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:47:10.478520   20351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6521/.minikube/bin
	I0906 23:47:10.479027   20351 out.go:303] Setting JSON to false
	I0906 23:47:10.479955   20351 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1777,"bootTime":1694042254,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:47:10.480006   20351 start.go:138] virtualization: kvm guest
	I0906 23:47:10.482204   20351 out.go:177] * [functional-369762] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0906 23:47:10.483598   20351 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 23:47:10.483598   20351 notify.go:220] Checking for updates...
	I0906 23:47:10.484924   20351 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:47:10.486337   20351 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6521/kubeconfig
	I0906 23:47:10.487672   20351 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6521/.minikube
	I0906 23:47:10.489001   20351 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 23:47:10.491789   20351 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 23:47:10.493303   20351 config.go:182] Loaded profile config "functional-369762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0906 23:47:10.493676   20351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:47:10.493721   20351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:10.507929   20351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34369
	I0906 23:47:10.508310   20351 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:10.508854   20351 main.go:141] libmachine: Using API Version  1
	I0906 23:47:10.508877   20351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:10.509268   20351 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:10.509452   20351 main.go:141] libmachine: (functional-369762) Calling .DriverName
	I0906 23:47:10.509672   20351 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 23:47:10.509928   20351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:47:10.509961   20351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:10.523590   20351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45015
	I0906 23:47:10.524062   20351 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:10.524548   20351 main.go:141] libmachine: Using API Version  1
	I0906 23:47:10.524563   20351 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:10.524871   20351 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:10.525038   20351 main.go:141] libmachine: (functional-369762) Calling .DriverName
	I0906 23:47:10.556148   20351 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0906 23:47:10.557575   20351 start.go:298] selected driver: kvm2
	I0906 23:47:10.557587   20351 start.go:902] validating driver "kvm2" against &{Name:functional-369762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:functional-369762 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.139 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 23:47:10.557710   20351 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 23:47:10.559696   20351 out.go:177] 
	W0906 23:47:10.561089   20351 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0906 23:47:10.562403   20351 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
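InternationalLanguage verifies that the same RSRC_INSUFFICIENT_REQ_MEMORY failure is rendered in the active locale (French here) while the error ID stays stable. A sketch of locale-keyed message selection (illustrative catalog, not minikube's translation files):

```python
# Illustrative locale-keyed message catalog (not minikube's translation files):
# the error ID stays stable while the rendered text follows the locale.
MESSAGES = {
    "en": "Exiting due to {id}: Requested memory allocation {req}MiB is less than the usable minimum of {min}MB",
    "fr": "Fermeture en raison de {id} : L'allocation de mémoire demandée {req} Mio est inférieure au minimum utilisable de {min} Mo",
}

def render(locale, req, minimum):
    template = MESSAGES.get(locale, MESSAGES["en"])  # fall back to English
    return template.format(id="RSRC_INSUFFICIENT_REQ_MEMORY", req=req, min=minimum)

print(render("fr", 250, 1800))  # matches the French exit message above
```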

                                                
                                    
TestFunctional/parallel/StatusCmd (0.95s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.48s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-369762 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-369762 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-lg2hf" [107f3481-72c9-4e86-8d4b-57553f858ac2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-lg2hf" [107f3481-72c9-4e86-8d4b-57553f858ac2] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.019314891s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.139:32365
functional_test.go:1674: http://192.168.50.139:32365: success! body:

Hostname: hello-node-connect-55497b8b78-lg2hf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.139:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.139:32365
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.48s)
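The echoserver body above is a set of `Section:` headers over `key=value` lines; a sketch of parsing it into nested dicts for assertions (illustrative, not the test's own code):

```python
# Sketch: parse an echoserver-style body ("Section:" headers over key=value
# lines) into nested dicts, the shape an assertion would want.
def parse_echo(body):
    sections, current = {}, None
    for line in body.splitlines():
        if not line.strip():
            continue
        if line.rstrip().endswith(":") and "=" not in line:
            current = line.strip().rstrip(":")   # e.g. "Request Headers"
            sections[current] = {}
        elif "=" in line and current is not None:
            key, _, value = line.strip().partition("=")
            sections[current][key] = value
    return sections

body = "Request Headers:\n\taccept-encoding=gzip\n\thost=192.168.50.139:32365\n"
print(parse_echo(body))
```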

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (48.92s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [91b21e4a-ee90-485a-8c12-fdff86987655] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014460093s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-369762 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-369762 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-369762 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-369762 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-369762 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [027199ff-0deb-4ec5-acf2-eb9111af3a95] Pending
helpers_test.go:344: "sp-pod" [027199ff-0deb-4ec5-acf2-eb9111af3a95] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [027199ff-0deb-4ec5-acf2-eb9111af3a95] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.021689507s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-369762 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-369762 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-369762 delete -f testdata/storage-provisioner/pod.yaml: (1.414012841s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-369762 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [18fa0bbb-8ee1-4b50-9a13-47ebec973277] Pending
helpers_test.go:344: "sp-pod" [18fa0bbb-8ee1-4b50-9a13-47ebec973277] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [18fa0bbb-8ee1-4b50-9a13-47ebec973277] Running
2023/09/06 23:47:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.017292134s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-369762 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.92s)
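The PVC test's core property: a file touched through the first sp-pod is still there after that pod is deleted and a second sp-pod mounts the same claim. A sketch of that persistence check, with a temp directory standing in for the provisioned volume (illustrative only):

```python
# Sketch of the property PersistentVolumeClaim verifies above: a file written
# through one pod survives pod deletion because it lives on the claim's
# backing volume. A temp directory stands in for the provisioned volume.
import pathlib
import tempfile

volume = pathlib.Path(tempfile.mkdtemp())  # stand-in for the PVC-backed volume

def pod_touch(mount, name):
    """kubectl exec sp-pod -- touch /tmp/mount/<name>, in miniature."""
    (mount / name).touch()

pod_touch(volume, "foo")      # first sp-pod writes the file
# ...first sp-pod deleted; second sp-pod mounts the same claim...
print(sorted(p.name for p in volume.iterdir()))  # → ['foo']
```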

                                                
                                    
TestFunctional/parallel/SSHCmd (0.36s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.85s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh -n functional-369762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 cp functional-369762:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd221042917/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh -n functional-369762 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.85s)

                                                
                                    
TestFunctional/parallel/MySQL (26.02s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-369762 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-2k6bc" [986f304e-27f0-4d90-b79b-bbf0e1fe0b64] Pending
helpers_test.go:344: "mysql-859648c796-2k6bc" [986f304e-27f0-4d90-b79b-bbf0e1fe0b64] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-2k6bc" [986f304e-27f0-4d90-b79b-bbf0e1fe0b64] Running
E0906 23:47:30.131210   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.025650064s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-369762 exec mysql-859648c796-2k6bc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-369762 exec mysql-859648c796-2k6bc -- mysql -ppassword -e "show databases;": exit status 1 (252.75548ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-369762 exec mysql-859648c796-2k6bc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-369762 exec mysql-859648c796-2k6bc -- mysql -ppassword -e "show databases;": exit status 1 (226.07073ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-369762 exec mysql-859648c796-2k6bc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.02s)
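The two `Access denied` exits above are a readiness race: mysqld was still initializing its grant tables, and the test simply reran the query until it succeeded. A sketch of that retry pattern (`run` is a stand-in for the kubectl exec call, not the test's actual helper):

```python
# Retry a command until it succeeds or attempts run out; absorbs the
# readiness race seen above. `run` stands in for the kubectl exec call.
def retry(run, attempts=5):
    last = None
    for _ in range(attempts):
        ok, out = run()
        if ok:
            return out
        last = out                 # e.g. "ERROR 1045 (28000): Access denied..."
    raise RuntimeError(last)

calls = iter([(False, "ERROR 1045"), (False, "ERROR 1045"), (True, "Database\nmysql")])
print(retry(lambda: next(calls)))  # succeeds on the third attempt
```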

                                                
                                    
TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13704/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "sudo cat /etc/test/nested/copy/13704/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
TestFunctional/parallel/CertSync (1.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13704.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "sudo cat /etc/ssl/certs/13704.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13704.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "sudo cat /usr/share/ca-certificates/13704.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/137042.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "sudo cat /etc/ssl/certs/137042.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/137042.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "sudo cat /usr/share/ca-certificates/137042.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.29s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-369762 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
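NodeLabels prints only the label keys via the Go template `{{range $k, $v := ...}}{{$k}} {{end}}`; the same iteration sketched over a sample label map (hypothetical labels):

```python
# The Go template above iterates a node's label map and emits only the keys;
# the same iteration over a hypothetical sample of node labels:
labels = {
    "kubernetes.io/hostname": "functional-369762",
    "kubernetes.io/os": "linux",
}
print(" ".join(sorted(labels)))  # → kubernetes.io/hostname kubernetes.io/os
```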

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369762 ssh "sudo systemctl is-active docker": exit status 1 (204.277566ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369762 ssh "sudo systemctl is-active crio": exit status 1 (223.545777ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
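An aside on the two non-zero exits above: they are the expected outcome, because `systemctl is-active` prints the unit state and mirrors it in its exit code (0 only when the unit is active; an inactive unit prints "inactive" and exits non-zero, typically 3, which `ssh` then surfaces as "Process exited with status 3"). A minimal sketch of that check, with a stub standing in for a real systemd unit so it runs anywhere:

```shell
# Sketch: confirm a container runtime is disabled, as the test above does.
# `systemctl is-active <unit>` exits 0 only for an active unit; inactive
# units print "inactive" and exit non-zero (typically 3).
check_runtime_disabled() {
  state="$("$@" 2>/dev/null)"   # e.g. systemctl is-active docker
  rc=$?
  [ "$rc" -ne 0 ] && [ "$state" = "inactive" ]
}

# Stand-in for `systemctl is-active` so the sketch is self-contained:
fake_is_active() { echo "inactive"; return 3; }

check_runtime_disabled fake_is_active && echo "runtime disabled"
```

This mirrors why the test treats exit status 1 with stdout "inactive" as a pass rather than a failure.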
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

TestFunctional/parallel/License (0.6s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.60s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-369762 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-369762 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-p7b2w" [0688fa09-d3de-4a0a-a5dd-1b9a8e99a3e3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-p7b2w" [0688fa09-d3de-4a0a-a5dd-1b9a8e99a3e3] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.028463614s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.86s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.86s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-369762 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-369762
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-369762
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-369762 image ls --format short --alsologtostderr:
I0906 23:47:19.388495   20809 out.go:296] Setting OutFile to fd 1 ...
I0906 23:47:19.388633   20809 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:47:19.388646   20809 out.go:309] Setting ErrFile to fd 2...
I0906 23:47:19.388653   20809 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:47:19.388858   20809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6521/.minikube/bin
I0906 23:47:19.389435   20809 config.go:182] Loaded profile config "functional-369762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0906 23:47:19.389530   20809 config.go:182] Loaded profile config "functional-369762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0906 23:47:19.389859   20809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0906 23:47:19.389908   20809 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:47:19.404050   20809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46209
I0906 23:47:19.404446   20809 main.go:141] libmachine: () Calling .GetVersion
I0906 23:47:19.405160   20809 main.go:141] libmachine: Using API Version  1
I0906 23:47:19.405188   20809 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:47:19.405517   20809 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:47:19.405674   20809 main.go:141] libmachine: (functional-369762) Calling .GetState
I0906 23:47:19.407541   20809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0906 23:47:19.407594   20809 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:47:19.422197   20809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41689
I0906 23:47:19.422624   20809 main.go:141] libmachine: () Calling .GetVersion
I0906 23:47:19.423119   20809 main.go:141] libmachine: Using API Version  1
I0906 23:47:19.423152   20809 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:47:19.423455   20809 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:47:19.423653   20809 main.go:141] libmachine: (functional-369762) Calling .DriverName
I0906 23:47:19.423861   20809 ssh_runner.go:195] Run: systemctl --version
I0906 23:47:19.423891   20809 main.go:141] libmachine: (functional-369762) Calling .GetSSHHostname
I0906 23:47:19.426785   20809 main.go:141] libmachine: (functional-369762) DBG | domain functional-369762 has defined MAC address 52:54:00:8f:49:5d in network mk-functional-369762
I0906 23:47:19.427169   20809 main.go:141] libmachine: (functional-369762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:49:5d", ip: ""} in network mk-functional-369762: {Iface:virbr1 ExpiryTime:2023-09-07 00:45:08 +0000 UTC Type:0 Mac:52:54:00:8f:49:5d Iaid: IPaddr:192.168.50.139 Prefix:24 Hostname:functional-369762 Clientid:01:52:54:00:8f:49:5d}
I0906 23:47:19.427198   20809 main.go:141] libmachine: (functional-369762) DBG | domain functional-369762 has defined IP address 192.168.50.139 and MAC address 52:54:00:8f:49:5d in network mk-functional-369762
I0906 23:47:19.427343   20809 main.go:141] libmachine: (functional-369762) Calling .GetSSHPort
I0906 23:47:19.427527   20809 main.go:141] libmachine: (functional-369762) Calling .GetSSHKeyPath
I0906 23:47:19.427701   20809 main.go:141] libmachine: (functional-369762) Calling .GetSSHUsername
I0906 23:47:19.427860   20809 sshutil.go:53] new ssh client: &{IP:192.168.50.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/functional-369762/id_rsa Username:docker}
I0906 23:47:19.508169   20809 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 23:47:19.553394   20809 main.go:141] libmachine: Making call to close driver server
I0906 23:47:19.553410   20809 main.go:141] libmachine: (functional-369762) Calling .Close
I0906 23:47:19.553665   20809 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:47:19.553698   20809 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:47:19.553704   20809 main.go:141] libmachine: (functional-369762) DBG | Closing plugin on server side
I0906 23:47:19.553707   20809 main.go:141] libmachine: Making call to close driver server
I0906 23:47:19.553732   20809 main.go:141] libmachine: (functional-369762) Calling .Close
I0906 23:47:19.553956   20809 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:47:19.553972   20809 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-369762 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/kube-scheduler              | v1.28.1            | sha256:b462ce | 18.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/library/minikube-local-cache-test | functional-369762  | sha256:11560d | 1.01kB |
| gcr.io/google-containers/addon-resizer      | functional-369762  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/kube-apiserver              | v1.28.1            | sha256:5c8012 | 34.6MB |
| registry.k8s.io/kube-proxy                  | v1.28.1            | sha256:6cdbab | 24.6MB |
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | sha256:b0b1fa | 27.7MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/nginx                     | latest             | sha256:eea7b3 | 70.5MB |
| localhost/my-image                          | functional-369762  | sha256:33594b | 775kB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/kube-controller-manager     | v1.28.1            | sha256:821b3d | 33.4MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-369762 image ls --format table --alsologtostderr:
I0906 23:47:25.295443   20976 out.go:296] Setting OutFile to fd 1 ...
I0906 23:47:25.295555   20976 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:47:25.295564   20976 out.go:309] Setting ErrFile to fd 2...
I0906 23:47:25.295568   20976 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:47:25.295757   20976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6521/.minikube/bin
I0906 23:47:25.296344   20976 config.go:182] Loaded profile config "functional-369762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0906 23:47:25.296436   20976 config.go:182] Loaded profile config "functional-369762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0906 23:47:25.296754   20976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0906 23:47:25.296811   20976 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:47:25.310831   20976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37063
I0906 23:47:25.311270   20976 main.go:141] libmachine: () Calling .GetVersion
I0906 23:47:25.311916   20976 main.go:141] libmachine: Using API Version  1
I0906 23:47:25.311940   20976 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:47:25.312271   20976 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:47:25.312434   20976 main.go:141] libmachine: (functional-369762) Calling .GetState
I0906 23:47:25.314267   20976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0906 23:47:25.314306   20976 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:47:25.328536   20976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
I0906 23:47:25.328919   20976 main.go:141] libmachine: () Calling .GetVersion
I0906 23:47:25.329415   20976 main.go:141] libmachine: Using API Version  1
I0906 23:47:25.329444   20976 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:47:25.329801   20976 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:47:25.330011   20976 main.go:141] libmachine: (functional-369762) Calling .DriverName
I0906 23:47:25.330191   20976 ssh_runner.go:195] Run: systemctl --version
I0906 23:47:25.330227   20976 main.go:141] libmachine: (functional-369762) Calling .GetSSHHostname
I0906 23:47:25.332731   20976 main.go:141] libmachine: (functional-369762) DBG | domain functional-369762 has defined MAC address 52:54:00:8f:49:5d in network mk-functional-369762
I0906 23:47:25.333107   20976 main.go:141] libmachine: (functional-369762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:49:5d", ip: ""} in network mk-functional-369762: {Iface:virbr1 ExpiryTime:2023-09-07 00:45:08 +0000 UTC Type:0 Mac:52:54:00:8f:49:5d Iaid: IPaddr:192.168.50.139 Prefix:24 Hostname:functional-369762 Clientid:01:52:54:00:8f:49:5d}
I0906 23:47:25.333132   20976 main.go:141] libmachine: (functional-369762) DBG | domain functional-369762 has defined IP address 192.168.50.139 and MAC address 52:54:00:8f:49:5d in network mk-functional-369762
I0906 23:47:25.333301   20976 main.go:141] libmachine: (functional-369762) Calling .GetSSHPort
I0906 23:47:25.333502   20976 main.go:141] libmachine: (functional-369762) Calling .GetSSHKeyPath
I0906 23:47:25.333657   20976 main.go:141] libmachine: (functional-369762) Calling .GetSSHUsername
I0906 23:47:25.333770   20976 sshutil.go:53] new ssh client: &{IP:192.168.50.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/functional-369762/id_rsa Username:docker}
I0906 23:47:25.420781   20976 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 23:47:25.466163   20976 main.go:141] libmachine: Making call to close driver server
I0906 23:47:25.466182   20976 main.go:141] libmachine: (functional-369762) Calling .Close
I0906 23:47:25.466417   20976 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:47:25.466444   20976 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:47:25.466457   20976 main.go:141] libmachine: Making call to close driver server
I0906 23:47:25.466466   20976 main.go:141] libmachine: (functional-369762) Calling .Close
I0906 23:47:25.466754   20976 main.go:141] libmachine: (functional-369762) DBG | Closing plugin on server side
I0906 23:47:25.466780   20976 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:47:25.466809   20976 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-369762 image ls --format json --alsologtostderr:
[{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-369762"],"size":"10823156"},{"id":"sha256:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"33396106"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha25
6:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","repoDigests":["registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"34617463"},{"id":"sha256:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","repoDigests":["registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"24555014"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"27731571"},{"id":"sha256:eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24","repoDigests":["docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c"],"repoTags":["docker.io/library/nginx:latest"],"size":"70479485"},{"id":"sha256:33594b8aefc443e74664ad847b9f39ff5b797b8089c3e7b3b298593a0f9caee7","repoDigests":[],"repoTags":["localhost/my-image:functional-369762"],"size":"775189"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:11560d84c32010c17275f84ee0338d6b1e9f5e990c56f358acdaeb9b7c217f34","repoD
igests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-369762"],"size":"1007"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"18802390"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDige
sts":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-369762 image ls --format json --alsologtostderr:
I0906 23:47:25.069968   20953 out.go:296] Setting OutFile to fd 1 ...
I0906 23:47:25.070087   20953 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:47:25.070096   20953 out.go:309] Setting ErrFile to fd 2...
I0906 23:47:25.070101   20953 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:47:25.070289   20953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6521/.minikube/bin
I0906 23:47:25.070818   20953 config.go:182] Loaded profile config "functional-369762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0906 23:47:25.070902   20953 config.go:182] Loaded profile config "functional-369762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0906 23:47:25.071224   20953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0906 23:47:25.071261   20953 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:47:25.086856   20953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44549
I0906 23:47:25.087289   20953 main.go:141] libmachine: () Calling .GetVersion
I0906 23:47:25.087840   20953 main.go:141] libmachine: Using API Version  1
I0906 23:47:25.087861   20953 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:47:25.088243   20953 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:47:25.088399   20953 main.go:141] libmachine: (functional-369762) Calling .GetState
I0906 23:47:25.090366   20953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0906 23:47:25.090437   20953 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:47:25.104147   20953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
I0906 23:47:25.104597   20953 main.go:141] libmachine: () Calling .GetVersion
I0906 23:47:25.105083   20953 main.go:141] libmachine: Using API Version  1
I0906 23:47:25.105112   20953 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:47:25.105444   20953 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:47:25.105634   20953 main.go:141] libmachine: (functional-369762) Calling .DriverName
I0906 23:47:25.105801   20953 ssh_runner.go:195] Run: systemctl --version
I0906 23:47:25.105832   20953 main.go:141] libmachine: (functional-369762) Calling .GetSSHHostname
I0906 23:47:25.108682   20953 main.go:141] libmachine: (functional-369762) DBG | domain functional-369762 has defined MAC address 52:54:00:8f:49:5d in network mk-functional-369762
I0906 23:47:25.109070   20953 main.go:141] libmachine: (functional-369762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:49:5d", ip: ""} in network mk-functional-369762: {Iface:virbr1 ExpiryTime:2023-09-07 00:45:08 +0000 UTC Type:0 Mac:52:54:00:8f:49:5d Iaid: IPaddr:192.168.50.139 Prefix:24 Hostname:functional-369762 Clientid:01:52:54:00:8f:49:5d}
I0906 23:47:25.109098   20953 main.go:141] libmachine: (functional-369762) DBG | domain functional-369762 has defined IP address 192.168.50.139 and MAC address 52:54:00:8f:49:5d in network mk-functional-369762
I0906 23:47:25.109251   20953 main.go:141] libmachine: (functional-369762) Calling .GetSSHPort
I0906 23:47:25.109423   20953 main.go:141] libmachine: (functional-369762) Calling .GetSSHKeyPath
I0906 23:47:25.109595   20953 main.go:141] libmachine: (functional-369762) Calling .GetSSHUsername
I0906 23:47:25.109749   20953 sshutil.go:53] new ssh client: &{IP:192.168.50.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/functional-369762/id_rsa Username:docker}
I0906 23:47:25.193845   20953 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 23:47:25.241731   20953 main.go:141] libmachine: Making call to close driver server
I0906 23:47:25.241749   20953 main.go:141] libmachine: (functional-369762) Calling .Close
I0906 23:47:25.242016   20953 main.go:141] libmachine: (functional-369762) DBG | Closing plugin on server side
I0906 23:47:25.242108   20953 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:47:25.242144   20953 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:47:25.242163   20953 main.go:141] libmachine: Making call to close driver server
I0906 23:47:25.242176   20953 main.go:141] libmachine: (functional-369762) Calling .Close
I0906 23:47:25.242406   20953 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:47:25.242785   20953 main.go:141] libmachine: (functional-369762) DBG | Closing plugin on server side
I0906 23:47:25.242928   20953 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-369762 image ls --format yaml --alsologtostderr:
- id: sha256:11560d84c32010c17275f84ee0338d6b1e9f5e990c56f358acdaeb9b7c217f34
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-369762
size: "1007"
- id: sha256:eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24
repoDigests:
- docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c
repoTags:
- docker.io/library/nginx:latest
size: "70479485"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "34617463"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "27731571"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-369762
size: "10823156"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "33396106"
- id: sha256:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "24555014"
- id: sha256:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "18802390"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-369762 image ls --format yaml --alsologtostderr:
I0906 23:47:19.596552   20832 out.go:296] Setting OutFile to fd 1 ...
I0906 23:47:19.596667   20832 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:47:19.596676   20832 out.go:309] Setting ErrFile to fd 2...
I0906 23:47:19.596681   20832 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:47:19.596875   20832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6521/.minikube/bin
I0906 23:47:19.597410   20832 config.go:182] Loaded profile config "functional-369762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0906 23:47:19.597495   20832 config.go:182] Loaded profile config "functional-369762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0906 23:47:19.597833   20832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0906 23:47:19.597876   20832 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:47:19.612144   20832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40433
I0906 23:47:19.612551   20832 main.go:141] libmachine: () Calling .GetVersion
I0906 23:47:19.613161   20832 main.go:141] libmachine: Using API Version  1
I0906 23:47:19.613190   20832 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:47:19.613526   20832 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:47:19.613733   20832 main.go:141] libmachine: (functional-369762) Calling .GetState
I0906 23:47:19.615570   20832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0906 23:47:19.615613   20832 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:47:19.630196   20832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44521
I0906 23:47:19.630574   20832 main.go:141] libmachine: () Calling .GetVersion
I0906 23:47:19.631006   20832 main.go:141] libmachine: Using API Version  1
I0906 23:47:19.631026   20832 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:47:19.631391   20832 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:47:19.631586   20832 main.go:141] libmachine: (functional-369762) Calling .DriverName
I0906 23:47:19.631784   20832 ssh_runner.go:195] Run: systemctl --version
I0906 23:47:19.631807   20832 main.go:141] libmachine: (functional-369762) Calling .GetSSHHostname
I0906 23:47:19.634632   20832 main.go:141] libmachine: (functional-369762) DBG | domain functional-369762 has defined MAC address 52:54:00:8f:49:5d in network mk-functional-369762
I0906 23:47:19.635018   20832 main.go:141] libmachine: (functional-369762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:49:5d", ip: ""} in network mk-functional-369762: {Iface:virbr1 ExpiryTime:2023-09-07 00:45:08 +0000 UTC Type:0 Mac:52:54:00:8f:49:5d Iaid: IPaddr:192.168.50.139 Prefix:24 Hostname:functional-369762 Clientid:01:52:54:00:8f:49:5d}
I0906 23:47:19.635046   20832 main.go:141] libmachine: (functional-369762) DBG | domain functional-369762 has defined IP address 192.168.50.139 and MAC address 52:54:00:8f:49:5d in network mk-functional-369762
I0906 23:47:19.635135   20832 main.go:141] libmachine: (functional-369762) Calling .GetSSHPort
I0906 23:47:19.635319   20832 main.go:141] libmachine: (functional-369762) Calling .GetSSHKeyPath
I0906 23:47:19.635485   20832 main.go:141] libmachine: (functional-369762) Calling .GetSSHUsername
I0906 23:47:19.635637   20832 sshutil.go:53] new ssh client: &{IP:192.168.50.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/functional-369762/id_rsa Username:docker}
I0906 23:47:19.724969   20832 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 23:47:19.757674   20832 main.go:141] libmachine: Making call to close driver server
I0906 23:47:19.757688   20832 main.go:141] libmachine: (functional-369762) Calling .Close
I0906 23:47:19.757967   20832 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:47:19.757989   20832 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:47:19.758000   20832 main.go:141] libmachine: Making call to close driver server
I0906 23:47:19.758010   20832 main.go:141] libmachine: (functional-369762) Calling .Close
I0906 23:47:19.758245   20832 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:47:19.758266   20832 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)
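The `size` fields in the YAML listing above are byte counts quoted as strings. A minimal Python sketch (values copied from a few entries of the listing; the `human` helper is illustrative, not part of minikube) converts them to human-readable binary units:

```python
# Sizes reported by `image ls --format yaml` are bytes, quoted as strings.
sizes = {
    "registry.k8s.io/etcd:3.5.9-0": "102894559",
    "docker.io/library/nginx:latest": "70479485",
    "registry.k8s.io/pause:3.9": "321520",
}

def human(n: float) -> str:
    """Render a byte count using binary (1024-based) units."""
    for unit in ("B", "KiB", "MiB", "GiB"):
        if n < 1024:
            return f"{n:.1f}{unit}"
        n /= 1024
    return f"{n:.1f}TiB"

for tag, size in sizes.items():
    print(tag, human(int(size)))
```

So the etcd image above weighs in at roughly 98 MiB and the pause image at about 314 KiB.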
TestFunctional/parallel/ImageCommands/ImageBuild (5.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369762 ssh pgrep buildkitd: exit status 1 (224.576923ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image build -t localhost/my-image:functional-369762 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-369762 image build -t localhost/my-image:functional-369762 testdata/build --alsologtostderr: (4.825087411s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-369762 image build -t localhost/my-image:functional-369762 testdata/build --alsologtostderr:
I0906 23:47:20.040349   20895 out.go:296] Setting OutFile to fd 1 ...
I0906 23:47:20.040543   20895 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:47:20.040556   20895 out.go:309] Setting ErrFile to fd 2...
I0906 23:47:20.040563   20895 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:47:20.040896   20895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6521/.minikube/bin
I0906 23:47:20.041664   20895 config.go:182] Loaded profile config "functional-369762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0906 23:47:20.042187   20895 config.go:182] Loaded profile config "functional-369762": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0906 23:47:20.042726   20895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0906 23:47:20.042777   20895 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:47:20.059008   20895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35277
I0906 23:47:20.059623   20895 main.go:141] libmachine: () Calling .GetVersion
I0906 23:47:20.060266   20895 main.go:141] libmachine: Using API Version  1
I0906 23:47:20.060298   20895 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:47:20.060696   20895 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:47:20.060871   20895 main.go:141] libmachine: (functional-369762) Calling .GetState
I0906 23:47:20.062720   20895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0906 23:47:20.062762   20895 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:47:20.077266   20895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34097
I0906 23:47:20.077632   20895 main.go:141] libmachine: () Calling .GetVersion
I0906 23:47:20.078067   20895 main.go:141] libmachine: Using API Version  1
I0906 23:47:20.078094   20895 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:47:20.078419   20895 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:47:20.078633   20895 main.go:141] libmachine: (functional-369762) Calling .DriverName
I0906 23:47:20.078834   20895 ssh_runner.go:195] Run: systemctl --version
I0906 23:47:20.078866   20895 main.go:141] libmachine: (functional-369762) Calling .GetSSHHostname
I0906 23:47:20.081487   20895 main.go:141] libmachine: (functional-369762) DBG | domain functional-369762 has defined MAC address 52:54:00:8f:49:5d in network mk-functional-369762
I0906 23:47:20.081919   20895 main.go:141] libmachine: (functional-369762) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:49:5d", ip: ""} in network mk-functional-369762: {Iface:virbr1 ExpiryTime:2023-09-07 00:45:08 +0000 UTC Type:0 Mac:52:54:00:8f:49:5d Iaid: IPaddr:192.168.50.139 Prefix:24 Hostname:functional-369762 Clientid:01:52:54:00:8f:49:5d}
I0906 23:47:20.081947   20895 main.go:141] libmachine: (functional-369762) DBG | domain functional-369762 has defined IP address 192.168.50.139 and MAC address 52:54:00:8f:49:5d in network mk-functional-369762
I0906 23:47:20.082111   20895 main.go:141] libmachine: (functional-369762) Calling .GetSSHPort
I0906 23:47:20.082289   20895 main.go:141] libmachine: (functional-369762) Calling .GetSSHKeyPath
I0906 23:47:20.082457   20895 main.go:141] libmachine: (functional-369762) Calling .GetSSHUsername
I0906 23:47:20.082598   20895 sshutil.go:53] new ssh client: &{IP:192.168.50.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/functional-369762/id_rsa Username:docker}
I0906 23:47:20.212265   20895 build_images.go:151] Building image from path: /tmp/build.445894113.tar
I0906 23:47:20.212341   20895 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0906 23:47:20.221877   20895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.445894113.tar
I0906 23:47:20.228712   20895 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.445894113.tar: stat -c "%s %y" /var/lib/minikube/build/build.445894113.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.445894113.tar': No such file or directory
I0906 23:47:20.228744   20895 ssh_runner.go:362] scp /tmp/build.445894113.tar --> /var/lib/minikube/build/build.445894113.tar (3072 bytes)
I0906 23:47:20.267823   20895 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.445894113
I0906 23:47:20.282150   20895 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.445894113 -xf /var/lib/minikube/build/build.445894113.tar
I0906 23:47:20.298586   20895 containerd.go:378] Building image: /var/lib/minikube/build/build.445894113
I0906 23:47:20.298667   20895 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.445894113 --local dockerfile=/var/lib/minikube/build/build.445894113 --output type=image,name=localhost/my-image:functional-369762
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.1s
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.4s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.7s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.9s
#6 [2/3] RUN true
#6 DONE 1.4s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:53c3a41d534374bcd55d2ad4fe60ba1120fc8d9b6e3b3fa7a299c4d2f07f878e 0.0s done
#8 exporting config sha256:33594b8aefc443e74664ad847b9f39ff5b797b8089c3e7b3b298593a0f9caee7 0.0s done
#8 naming to localhost/my-image:functional-369762 done
#8 DONE 0.2s
I0906 23:47:24.782275   20895 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.445894113 --local dockerfile=/var/lib/minikube/build/build.445894113 --output type=image,name=localhost/my-image:functional-369762: (4.483574887s)
I0906 23:47:24.782349   20895 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.445894113
I0906 23:47:24.796905   20895 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.445894113.tar
I0906 23:47:24.808181   20895 build_images.go:207] Built localhost/my-image:functional-369762 from /tmp/build.445894113.tar
I0906 23:47:24.808211   20895 build_images.go:123] succeeded building to: functional-369762
I0906 23:47:24.808216   20895 build_images.go:124] failed building to: 
I0906 23:47:24.808243   20895 main.go:141] libmachine: Making call to close driver server
I0906 23:47:24.808260   20895 main.go:141] libmachine: (functional-369762) Calling .Close
I0906 23:47:24.808538   20895 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:47:24.808561   20895 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:47:24.808570   20895 main.go:141] libmachine: Making call to close driver server
I0906 23:47:24.808579   20895 main.go:141] libmachine: (functional-369762) Calling .Close
I0906 23:47:24.808579   20895 main.go:141] libmachine: (functional-369762) DBG | Closing plugin on server side
I0906 23:47:24.808792   20895 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:47:24.808815   20895 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:47:24.808817   20895 main.go:141] libmachine: (functional-369762) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.27s)
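The build log above shows minikube's flow: it tars the local build context (`/tmp/build.445894113.tar`), copies it into the guest, extracts it under `/var/lib/minikube/build/`, and runs `buildctl` against it. Minikube does this in Go (`build_images.go`); the first step can be sketched in Python with a throwaway context directory (all names here are hypothetical, not minikube's):

```python
import pathlib
import tarfile
import tempfile

# Sketch of the packaging step: bundle a build context (a Dockerfile plus
# the files it ADDs) into a tar, as minikube does before scp'ing it to the VM.
ctx = pathlib.Path(tempfile.mkdtemp())
(ctx / "Dockerfile").write_text("FROM scratch\nADD content.txt /\n")
(ctx / "content.txt").write_text("hello\n")

tar_path = ctx / "build.tar"
with tarfile.open(tar_path, "w") as tar:
    for name in ("Dockerfile", "content.txt"):
        tar.add(ctx / name, arcname=name)

with tarfile.open(tar_path) as tar:
    print(sorted(tar.getnames()))  # ['Dockerfile', 'content.txt']
```

On the guest side the equivalent of `tar -C /var/lib/minikube/build/build.NNN -xf …` unpacks it before `buildctl build --frontend dockerfile.v0` runs.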
TestFunctional/parallel/ImageCommands/Setup (2.1s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.081318405s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-369762
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.10s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)
TestFunctional/parallel/ProfileCmd/profile_list (0.29s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "234.459469ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "55.254086ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "274.358869ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "47.733723ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)
TestFunctional/parallel/MountCmd/any-port (9.73s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-369762 /tmp/TestFunctionalparallelMountCmdany-port2136861117/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694044015364633471" to /tmp/TestFunctionalparallelMountCmdany-port2136861117/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694044015364633471" to /tmp/TestFunctionalparallelMountCmdany-port2136861117/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694044015364633471" to /tmp/TestFunctionalparallelMountCmdany-port2136861117/001/test-1694044015364633471
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369762 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (202.333365ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  6 23:46 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  6 23:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  6 23:46 test-1694044015364633471
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh cat /mount-9p/test-1694044015364633471
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-369762 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d665e978-0f3e-44a8-95f6-6e0e52cf155e] Pending
helpers_test.go:344: "busybox-mount" [d665e978-0f3e-44a8-95f6-6e0e52cf155e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d665e978-0f3e-44a8-95f6-6e0e52cf155e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d665e978-0f3e-44a8-95f6-6e0e52cf155e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.049041589s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-369762 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-369762 /tmp/TestFunctionalparallelMountCmdany-port2136861117/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.73s)
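Note the pattern in the mount log above: the first `findmnt -T /mount-9p` probe exits non-zero (the 9p mount isn't up yet) and the test simply retries until it succeeds. That poll-until-ready loop can be sketched generically (names are hypothetical; the real retry lives in the Go test helpers):

```python
import time

def wait_for(check, timeout: float = 10.0, interval: float = 0.25) -> bool:
    """Poll `check` until it returns truthy or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Simulate a mount that only appears on the second probe, as in the log above.
probes = iter([False, True])
print(wait_for(lambda: next(probes)))  # True
```

The same shape covers the pod-readiness waits (`waiting 4m0s for pods matching …`) elsewhere in this report.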
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.96s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image load --daemon gcr.io/google-containers/addon-resizer:functional-369762 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-369762 image load --daemon gcr.io/google-containers/addon-resizer:functional-369762 --alsologtostderr: (3.749339415s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.96s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image load --daemon gcr.io/google-containers/addon-resizer:functional-369762 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-369762 image load --daemon gcr.io/google-containers/addon-resizer:functional-369762 --alsologtostderr: (4.158201268s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.40s)
TestFunctional/parallel/ServiceCmd/List (0.27s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.27s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 service list -o json
functional_test.go:1493: Took "297.672382ms" to run "out/minikube-linux-amd64 -p functional-369762 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.139:32152
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.98s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.127746177s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-369762
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image load --daemon gcr.io/google-containers/addon-resizer:functional-369762 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-369762 image load --daemon gcr.io/google-containers/addon-resizer:functional-369762 --alsologtostderr: (4.562167257s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.98s)
TestFunctional/parallel/ServiceCmd/Format (0.27s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)
TestFunctional/parallel/ServiceCmd/URL (0.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.139:32152
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)
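The service tests above report endpoints like `http://192.168.50.139:32152` (node IP plus NodePort). Splitting such a URL back into its parts is a one-liner with the standard library; a small sketch using the endpoint found above:

```python
from urllib.parse import urlparse

# Endpoint printed by `minikube service hello-node --url` in the log above.
u = urlparse("http://192.168.50.139:32152")
print(u.hostname, u.port)  # 192.168.50.139 32152
```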
TestFunctional/parallel/MountCmd/specific-port (1.96s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-369762 /tmp/TestFunctionalparallelMountCmdspecific-port3018326884/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369762 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (188.937285ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-369762 /tmp/TestFunctionalparallelMountCmdspecific-port3018326884/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369762 ssh "sudo umount -f /mount-9p": exit status 1 (238.394028ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-369762 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-369762 /tmp/TestFunctionalparallelMountCmdspecific-port3018326884/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-369762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2121768456/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-369762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2121768456/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-369762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2121768456/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-369762 ssh "findmnt -T" /mount1: exit status 1 (250.921301ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-369762 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-369762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2121768456/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-369762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2121768456/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-369762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2121768456/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image save gcr.io/google-containers/addon-resizer:functional-369762 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-369762 image save gcr.io/google-containers/addon-resizer:functional-369762 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.102696952s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.10s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image rm gcr.io/google-containers/addon-resizer:functional-369762 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-369762 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (2.717868403s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.00s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-369762
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-369762 image save --daemon gcr.io/google-containers/addon-resizer:functional-369762 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-369762 image save --daemon gcr.io/google-containers/addon-resizer:functional-369762 --alsologtostderr: (1.830486119s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-369762
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.86s)

TestFunctional/delete_addon-resizer_images (0.06s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-369762
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-369762
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-369762
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (86.76s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-757297 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0906 23:48:52.052128   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-757297 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m26.76376717s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (86.76s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.44s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757297 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-757297 addons enable ingress --alsologtostderr -v=5: (11.444273251s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.44s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757297 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (34.87s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-757297 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-757297 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.964220179s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-757297 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-757297 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [54b62293-c1ec-4aec-b18e-3d9aa4e6880f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [54b62293-c1ec-4aec-b18e-3d9aa4e6880f] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.012211117s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757297 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-757297 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757297 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.8
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757297 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-757297 addons disable ingress-dns --alsologtostderr -v=1: (2.208595509s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757297 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-757297 addons disable ingress --alsologtostderr -v=1: (7.497498227s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (34.87s)

TestJSONOutput/start/Command (63.19s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-308103 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-308103 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m3.187478802s)
--- PASS: TestJSONOutput/start/Command (63.19s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.59s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-308103 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-308103 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-308103 --output=json --user=testUser
E0906 23:51:08.208719   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-308103 --output=json --user=testUser: (7.081367719s)
--- PASS: TestJSONOutput/stop/Command (7.08s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.17s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-067335 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-067335 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.395711ms)

-- stdout --
	{"specversion":"1.0","id":"6d8a5853-c38d-497f-93ba-3e55f991f332","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-067335] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ff582e6-51df-4deb-b22c-02254ca70bff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17174"}}
	{"specversion":"1.0","id":"ce49510a-1803-4618-9ffd-1231cccab52e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f0ff2b15-c444-4255-9cc0-f872a1c0a449","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17174-6521/kubeconfig"}}
	{"specversion":"1.0","id":"a0518608-6cf2-49e9-bc7f-dcb0c536633b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6521/.minikube"}}
	{"specversion":"1.0","id":"2ee13fec-f823-4f6c-b5da-26d3f8678f4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7abcd72f-8755-4f6c-96aa-cf94f4c8d639","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"18b52920-46fe-4612-9ab2-4928bfc711db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-067335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-067335
--- PASS: TestErrorJSONOutput (0.17s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (100.02s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-056469 --driver=kvm2  --container-runtime=containerd
E0906 23:51:35.893458   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
E0906 23:51:53.832014   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
E0906 23:51:53.837291   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
E0906 23:51:53.847575   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
E0906 23:51:53.867823   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
E0906 23:51:53.908096   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
E0906 23:51:53.988459   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
E0906 23:51:54.148937   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
E0906 23:51:54.469564   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
E0906 23:51:55.110451   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
E0906 23:51:56.391017   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
E0906 23:51:58.951237   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-056469 --driver=kvm2  --container-runtime=containerd: (47.28262638s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-058645 --driver=kvm2  --container-runtime=containerd
E0906 23:52:04.071456   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
E0906 23:52:14.311684   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
E0906 23:52:34.792862   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-058645 --driver=kvm2  --container-runtime=containerd: (50.056514272s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-056469
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-058645
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-058645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-058645
helpers_test.go:175: Cleaning up "first-056469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-056469
--- PASS: TestMinikubeProfile (100.02s)

TestMountStart/serial/StartWithMountFirst (28.87s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-257731 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0906 23:53:15.754129   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-257731 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.866401625s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.87s)

TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-257731 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-257731 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

TestMountStart/serial/StartWithMountSecond (26.43s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-274445 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-274445 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (25.426512838s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.43s)

TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-274445 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-274445 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

TestMountStart/serial/DeleteFirst (0.84s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-257731 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.84s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-274445 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-274445 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-274445
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-274445: (1.173878821s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (24.31s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-274445
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-274445: (23.313359577s)
--- PASS: TestMountStart/serial/RestartStopped (24.31s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-274445 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-274445 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (128.62s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-367040 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0906 23:54:24.364136   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:54:24.369395   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:54:24.379634   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:54:24.399926   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:54:24.440239   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:54:24.520577   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:54:24.680862   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:54:25.001464   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:54:25.642458   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:54:26.922585   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:54:29.482821   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:54:34.602973   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:54:37.675154   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
E0906 23:54:44.843892   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:55:05.325106   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:55:46.285699   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:56:08.208455   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-367040 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m8.224325231s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (128.62s)

TestMultiNode/serial/DeployApp2Nodes (5.29s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-367040 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-367040 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-367040 -- rollout status deployment/busybox: (3.67317243s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-367040 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-367040 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-367040 -- exec busybox-5bc68d56bd-98dwl -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-367040 -- exec busybox-5bc68d56bd-bcpwv -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-367040 -- exec busybox-5bc68d56bd-98dwl -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-367040 -- exec busybox-5bc68d56bd-bcpwv -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-367040 -- exec busybox-5bc68d56bd-98dwl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-367040 -- exec busybox-5bc68d56bd-bcpwv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.29s)

TestMultiNode/serial/PingHostFrom2Pods (0.79s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-367040 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-367040 -- exec busybox-5bc68d56bd-98dwl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-367040 -- exec busybox-5bc68d56bd-98dwl -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-367040 -- exec busybox-5bc68d56bd-bcpwv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-367040 -- exec busybox-5bc68d56bd-bcpwv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

TestMultiNode/serial/AddNode (42.18s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-367040 -v 3 --alsologtostderr
E0906 23:56:53.830290   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
E0906 23:57:08.206151   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-367040 -v 3 --alsologtostderr: (41.612140138s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.18s)

TestMultiNode/serial/ProfileList (0.2s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

TestMultiNode/serial/CopyFile (7.04s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 cp testdata/cp-test.txt multinode-367040:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 cp multinode-367040:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3723154330/001/cp-test_multinode-367040.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 cp multinode-367040:/home/docker/cp-test.txt multinode-367040-m02:/home/docker/cp-test_multinode-367040_multinode-367040-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040-m02 "sudo cat /home/docker/cp-test_multinode-367040_multinode-367040-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 cp multinode-367040:/home/docker/cp-test.txt multinode-367040-m03:/home/docker/cp-test_multinode-367040_multinode-367040-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040-m03 "sudo cat /home/docker/cp-test_multinode-367040_multinode-367040-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 cp testdata/cp-test.txt multinode-367040-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 cp multinode-367040-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3723154330/001/cp-test_multinode-367040-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 cp multinode-367040-m02:/home/docker/cp-test.txt multinode-367040:/home/docker/cp-test_multinode-367040-m02_multinode-367040.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040 "sudo cat /home/docker/cp-test_multinode-367040-m02_multinode-367040.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 cp multinode-367040-m02:/home/docker/cp-test.txt multinode-367040-m03:/home/docker/cp-test_multinode-367040-m02_multinode-367040-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040-m03 "sudo cat /home/docker/cp-test_multinode-367040-m02_multinode-367040-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 cp testdata/cp-test.txt multinode-367040-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 cp multinode-367040-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3723154330/001/cp-test_multinode-367040-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 cp multinode-367040-m03:/home/docker/cp-test.txt multinode-367040:/home/docker/cp-test_multinode-367040-m03_multinode-367040.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040 "sudo cat /home/docker/cp-test_multinode-367040-m03_multinode-367040.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 cp multinode-367040-m03:/home/docker/cp-test.txt multinode-367040-m02:/home/docker/cp-test_multinode-367040-m03_multinode-367040-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 ssh -n multinode-367040-m02 "sudo cat /home/docker/cp-test_multinode-367040-m03_multinode-367040-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.04s)

TestMultiNode/serial/StopNode (2.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 node stop m03
E0906 23:57:21.515817   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-367040 node stop m03: (1.2794672s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-367040 status: exit status 7 (418.486819ms)

-- stdout --
	multinode-367040
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-367040-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-367040-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-367040 status --alsologtostderr: exit status 7 (430.900378ms)

-- stdout --
	multinode-367040
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-367040-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-367040-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0906 23:57:22.098864   27442 out.go:296] Setting OutFile to fd 1 ...
	I0906 23:57:22.099031   27442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:57:22.099042   27442 out.go:309] Setting ErrFile to fd 2...
	I0906 23:57:22.099050   27442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 23:57:22.099269   27442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6521/.minikube/bin
	I0906 23:57:22.099453   27442 out.go:303] Setting JSON to false
	I0906 23:57:22.099492   27442 mustload.go:65] Loading cluster: multinode-367040
	I0906 23:57:22.099589   27442 notify.go:220] Checking for updates...
	I0906 23:57:22.099908   27442 config.go:182] Loaded profile config "multinode-367040": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0906 23:57:22.099923   27442 status.go:255] checking status of multinode-367040 ...
	I0906 23:57:22.100311   27442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:57:22.100350   27442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:57:22.120211   27442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
	I0906 23:57:22.120670   27442 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:57:22.121245   27442 main.go:141] libmachine: Using API Version  1
	I0906 23:57:22.121264   27442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:57:22.121624   27442 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:57:22.121854   27442 main.go:141] libmachine: (multinode-367040) Calling .GetState
	I0906 23:57:22.123382   27442 status.go:330] multinode-367040 host status = "Running" (err=<nil>)
	I0906 23:57:22.123398   27442 host.go:66] Checking if "multinode-367040" exists ...
	I0906 23:57:22.123664   27442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:57:22.123701   27442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:57:22.138490   27442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0906 23:57:22.138816   27442 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:57:22.139184   27442 main.go:141] libmachine: Using API Version  1
	I0906 23:57:22.139208   27442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:57:22.139479   27442 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:57:22.139619   27442 main.go:141] libmachine: (multinode-367040) Calling .GetIP
	I0906 23:57:22.142241   27442 main.go:141] libmachine: (multinode-367040) DBG | domain multinode-367040 has defined MAC address 52:54:00:8a:f6:e5 in network mk-multinode-367040
	I0906 23:57:22.142654   27442 main.go:141] libmachine: (multinode-367040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:f6:e5", ip: ""} in network mk-multinode-367040: {Iface:virbr1 ExpiryTime:2023-09-07 00:54:31 +0000 UTC Type:0 Mac:52:54:00:8a:f6:e5 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-367040 Clientid:01:52:54:00:8a:f6:e5}
	I0906 23:57:22.142687   27442 main.go:141] libmachine: (multinode-367040) DBG | domain multinode-367040 has defined IP address 192.168.39.119 and MAC address 52:54:00:8a:f6:e5 in network mk-multinode-367040
	I0906 23:57:22.142808   27442 host.go:66] Checking if "multinode-367040" exists ...
	I0906 23:57:22.143087   27442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:57:22.143126   27442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:57:22.156406   27442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45945
	I0906 23:57:22.156805   27442 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:57:22.157239   27442 main.go:141] libmachine: Using API Version  1
	I0906 23:57:22.157257   27442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:57:22.157524   27442 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:57:22.157672   27442 main.go:141] libmachine: (multinode-367040) Calling .DriverName
	I0906 23:57:22.157851   27442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 23:57:22.157885   27442 main.go:141] libmachine: (multinode-367040) Calling .GetSSHHostname
	I0906 23:57:22.160704   27442 main.go:141] libmachine: (multinode-367040) DBG | domain multinode-367040 has defined MAC address 52:54:00:8a:f6:e5 in network mk-multinode-367040
	I0906 23:57:22.161080   27442 main.go:141] libmachine: (multinode-367040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:f6:e5", ip: ""} in network mk-multinode-367040: {Iface:virbr1 ExpiryTime:2023-09-07 00:54:31 +0000 UTC Type:0 Mac:52:54:00:8a:f6:e5 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-367040 Clientid:01:52:54:00:8a:f6:e5}
	I0906 23:57:22.161107   27442 main.go:141] libmachine: (multinode-367040) DBG | domain multinode-367040 has defined IP address 192.168.39.119 and MAC address 52:54:00:8a:f6:e5 in network mk-multinode-367040
	I0906 23:57:22.161223   27442 main.go:141] libmachine: (multinode-367040) Calling .GetSSHPort
	I0906 23:57:22.161393   27442 main.go:141] libmachine: (multinode-367040) Calling .GetSSHKeyPath
	I0906 23:57:22.161534   27442 main.go:141] libmachine: (multinode-367040) Calling .GetSSHUsername
	I0906 23:57:22.161664   27442 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/multinode-367040/id_rsa Username:docker}
	I0906 23:57:22.253732   27442 ssh_runner.go:195] Run: systemctl --version
	I0906 23:57:22.259699   27442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 23:57:22.277838   27442 kubeconfig.go:92] found "multinode-367040" server: "https://192.168.39.119:8443"
	I0906 23:57:22.277876   27442 api_server.go:166] Checking apiserver status ...
	I0906 23:57:22.277927   27442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 23:57:22.292375   27442 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1055/cgroup
	I0906 23:57:22.300677   27442 api_server.go:182] apiserver freezer: "6:freezer:/kubepods/burstable/pod8ee2ac13df5927eda8d20ae33bbe5337/51de8fb756ab9307a4ca9a32b2795a2384cd854b64c58ff994a039863a5d6231"
	I0906 23:57:22.300743   27442 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod8ee2ac13df5927eda8d20ae33bbe5337/51de8fb756ab9307a4ca9a32b2795a2384cd854b64c58ff994a039863a5d6231/freezer.state
	I0906 23:57:22.311017   27442 api_server.go:204] freezer state: "THAWED"
	I0906 23:57:22.311044   27442 api_server.go:253] Checking apiserver healthz at https://192.168.39.119:8443/healthz ...
	I0906 23:57:22.316365   27442 api_server.go:279] https://192.168.39.119:8443/healthz returned 200:
	ok
	I0906 23:57:22.316387   27442 status.go:421] multinode-367040 apiserver status = Running (err=<nil>)
	I0906 23:57:22.316400   27442 status.go:257] multinode-367040 status: &{Name:multinode-367040 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 23:57:22.316424   27442 status.go:255] checking status of multinode-367040-m02 ...
	I0906 23:57:22.316707   27442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:57:22.316740   27442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:57:22.331336   27442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35349
	I0906 23:57:22.331764   27442 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:57:22.332207   27442 main.go:141] libmachine: Using API Version  1
	I0906 23:57:22.332229   27442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:57:22.332517   27442 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:57:22.332731   27442 main.go:141] libmachine: (multinode-367040-m02) Calling .GetState
	I0906 23:57:22.334289   27442 status.go:330] multinode-367040-m02 host status = "Running" (err=<nil>)
	I0906 23:57:22.334305   27442 host.go:66] Checking if "multinode-367040-m02" exists ...
	I0906 23:57:22.334591   27442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:57:22.334623   27442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:57:22.348398   27442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33001
	I0906 23:57:22.348710   27442 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:57:22.349205   27442 main.go:141] libmachine: Using API Version  1
	I0906 23:57:22.349228   27442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:57:22.349525   27442 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:57:22.349724   27442 main.go:141] libmachine: (multinode-367040-m02) Calling .GetIP
	I0906 23:57:22.352605   27442 main.go:141] libmachine: (multinode-367040-m02) DBG | domain multinode-367040-m02 has defined MAC address 52:54:00:c0:44:93 in network mk-multinode-367040
	I0906 23:57:22.353006   27442 main.go:141] libmachine: (multinode-367040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:44:93", ip: ""} in network mk-multinode-367040: {Iface:virbr1 ExpiryTime:2023-09-07 00:55:42 +0000 UTC Type:0 Mac:52:54:00:c0:44:93 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:multinode-367040-m02 Clientid:01:52:54:00:c0:44:93}
	I0906 23:57:22.353037   27442 main.go:141] libmachine: (multinode-367040-m02) DBG | domain multinode-367040-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:c0:44:93 in network mk-multinode-367040
	I0906 23:57:22.353182   27442 host.go:66] Checking if "multinode-367040-m02" exists ...
	I0906 23:57:22.353470   27442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:57:22.353502   27442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:57:22.367051   27442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0906 23:57:22.367403   27442 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:57:22.367795   27442 main.go:141] libmachine: Using API Version  1
	I0906 23:57:22.367819   27442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:57:22.368068   27442 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:57:22.368221   27442 main.go:141] libmachine: (multinode-367040-m02) Calling .DriverName
	I0906 23:57:22.368392   27442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 23:57:22.368414   27442 main.go:141] libmachine: (multinode-367040-m02) Calling .GetSSHHostname
	I0906 23:57:22.370748   27442 main.go:141] libmachine: (multinode-367040-m02) DBG | domain multinode-367040-m02 has defined MAC address 52:54:00:c0:44:93 in network mk-multinode-367040
	I0906 23:57:22.371123   27442 main.go:141] libmachine: (multinode-367040-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:44:93", ip: ""} in network mk-multinode-367040: {Iface:virbr1 ExpiryTime:2023-09-07 00:55:42 +0000 UTC Type:0 Mac:52:54:00:c0:44:93 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:multinode-367040-m02 Clientid:01:52:54:00:c0:44:93}
	I0906 23:57:22.371162   27442 main.go:141] libmachine: (multinode-367040-m02) DBG | domain multinode-367040-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:c0:44:93 in network mk-multinode-367040
	I0906 23:57:22.371279   27442 main.go:141] libmachine: (multinode-367040-m02) Calling .GetSSHPort
	I0906 23:57:22.371436   27442 main.go:141] libmachine: (multinode-367040-m02) Calling .GetSSHKeyPath
	I0906 23:57:22.371613   27442 main.go:141] libmachine: (multinode-367040-m02) Calling .GetSSHUsername
	I0906 23:57:22.371723   27442 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17174-6521/.minikube/machines/multinode-367040-m02/id_rsa Username:docker}
	I0906 23:57:22.457132   27442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 23:57:22.471314   27442 status.go:257] multinode-367040-m02 status: &{Name:multinode-367040-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0906 23:57:22.471348   27442 status.go:255] checking status of multinode-367040-m03 ...
	I0906 23:57:22.471658   27442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0906 23:57:22.471705   27442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:57:22.486234   27442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I0906 23:57:22.486678   27442 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:57:22.487130   27442 main.go:141] libmachine: Using API Version  1
	I0906 23:57:22.487152   27442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:57:22.487441   27442 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:57:22.487592   27442 main.go:141] libmachine: (multinode-367040-m03) Calling .GetState
	I0906 23:57:22.489242   27442 status.go:330] multinode-367040-m03 host status = "Stopped" (err=<nil>)
	I0906 23:57:22.489268   27442 status.go:343] host is not running, skipping remaining checks
	I0906 23:57:22.489276   27442 status.go:257] multinode-367040-m03 status: &{Name:multinode-367040-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)
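The status check in the stderr log above verifies each node's apiserver in two steps: read the container's cgroup freezer state (expecting "THAWED"), then probe https://192.168.39.119:8443/healthz for a 200. A minimal sketch of that decision logic, assuming the Paused/Running/Error mapping minikube's status reporting uses; this is an illustrative reimplementation, not minikube source:

```python
# Illustrative reimplementation of the apiserver status decision seen in
# the log (NOT the real status.go). freezer_state would come from
# /sys/fs/cgroup/freezer/.../freezer.state; healthz_code from an HTTPS
# GET of /healthz on the apiserver endpoint.
def apiserver_status(freezer_state: str, healthz_code: int) -> str:
    if freezer_state != "THAWED":  # container frozen -> cluster is paused
        return "Paused"
    return "Running" if healthz_code == 200 else "Error"

print(apiserver_status("THAWED", 200))  # prints: Running (as in the log)
```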

                                                
                                    
TestMultiNode/serial/StartAfterStop (27.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-367040 node start m03 --alsologtostderr: (26.881135699s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.48s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (312.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-367040
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-367040
E0906 23:59:24.364111   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0906 23:59:52.047020   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-367040: (3m4.581040657s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-367040 --wait=true -v=8 --alsologtostderr
E0907 00:01:08.208320   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
E0907 00:01:53.830403   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
E0907 00:02:31.254139   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-367040 --wait=true -v=8 --alsologtostderr: (2m7.994043488s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-367040
--- PASS: TestMultiNode/serial/RestartKeepsNodes (312.66s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-367040 node delete m03: (1.15809483s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.67s)
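The go-template passed to `kubectl get nodes` above walks every node's conditions and prints the status of the one whose type is "Ready". The same walk, over the JSON shape that `kubectl get nodes -o json` returns, can be sketched as follows; the `sample` data is invented for illustration:

```python
# Collect each node's Ready-condition status, mirroring the go-template
# in the test: range items -> range conditions -> if type == "Ready".
def ready_statuses(node_list: dict) -> list:
    out = []
    for node in node_list["items"]:
        for cond in node["status"]["conditions"]:
            if cond["type"] == "Ready":
                out.append(cond["status"])
    return out

# Invented two-node sample in the NodeList JSON shape.
sample = {"items": [
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
]}
print(ready_statuses(sample))  # prints: ['True', 'True']
```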

                                                
                                    
TestMultiNode/serial/StopMultiNode (183.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 stop
E0907 00:04:24.363665   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-367040 stop: (3m3.478894024s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-367040 status: exit status 7 (71.810649ms)

                                                
                                                
-- stdout --
	multinode-367040
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-367040-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-367040 status --alsologtostderr: exit status 7 (70.513847ms)

                                                
                                                
-- stdout --
	multinode-367040
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-367040-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0907 00:06:07.882715   29632 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:06:07.882811   29632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:06:07.882818   29632 out.go:309] Setting ErrFile to fd 2...
	I0907 00:06:07.882823   29632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:06:07.883023   29632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6521/.minikube/bin
	I0907 00:06:07.883177   29632 out.go:303] Setting JSON to false
	I0907 00:06:07.883208   29632 mustload.go:65] Loading cluster: multinode-367040
	I0907 00:06:07.883241   29632 notify.go:220] Checking for updates...
	I0907 00:06:07.883581   29632 config.go:182] Loaded profile config "multinode-367040": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0907 00:06:07.883594   29632 status.go:255] checking status of multinode-367040 ...
	I0907 00:06:07.883900   29632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0907 00:06:07.883947   29632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:07.897892   29632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42163
	I0907 00:06:07.898313   29632 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:07.898901   29632 main.go:141] libmachine: Using API Version  1
	I0907 00:06:07.898925   29632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:07.899311   29632 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:07.899468   29632 main.go:141] libmachine: (multinode-367040) Calling .GetState
	I0907 00:06:07.900872   29632 status.go:330] multinode-367040 host status = "Stopped" (err=<nil>)
	I0907 00:06:07.900882   29632 status.go:343] host is not running, skipping remaining checks
	I0907 00:06:07.900887   29632 status.go:257] multinode-367040 status: &{Name:multinode-367040 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:06:07.900906   29632 status.go:255] checking status of multinode-367040-m02 ...
	I0907 00:06:07.901158   29632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0907 00:06:07.901189   29632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:07.914221   29632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46069
	I0907 00:06:07.914518   29632 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:07.914901   29632 main.go:141] libmachine: Using API Version  1
	I0907 00:06:07.914925   29632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:07.915241   29632 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:07.915384   29632 main.go:141] libmachine: (multinode-367040-m02) Calling .GetState
	I0907 00:06:07.916714   29632 status.go:330] multinode-367040-m02 host status = "Stopped" (err=<nil>)
	I0907 00:06:07.916729   29632 status.go:343] host is not running, skipping remaining checks
	I0907 00:06:07.916736   29632 status.go:257] multinode-367040-m02 status: &{Name:multinode-367040-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.62s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (92.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-367040 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0907 00:06:08.208861   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
E0907 00:06:53.829987   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-367040 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m31.805772608s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-367040 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (92.33s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (50.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-367040
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-367040-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-367040-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (55.303552ms)

                                                
                                                
-- stdout --
	* [multinode-367040-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17174
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17174-6521/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6521/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-367040-m02' is duplicated with machine name 'multinode-367040-m02' in profile 'multinode-367040'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-367040-m03 --driver=kvm2  --container-runtime=containerd
E0907 00:08:16.876399   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-367040-m03 --driver=kvm2  --container-runtime=containerd: (48.969272355s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-367040
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-367040: exit status 80 (217.740379ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-367040
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-367040-m03 already exists in multinode-367040-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-367040-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.23s)
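The exit-14 failure above comes from a uniqueness guard: a new profile name may not collide with an existing profile or with a machine name inside a multi-node profile (here, multinode-367040's second node is itself named multinode-367040-m02). A hypothetical sketch of that check; the function and data layout are invented for illustration:

```python
# Hypothetical profile-name uniqueness check. `existing` maps each
# profile name to the machine names that profile owns.
def profile_name_free(name: str, existing: dict) -> bool:
    if name in existing:                      # clashes with a profile name
        return False
    return all(name not in machines           # clashes with a node/machine
               for machines in existing.values())

existing = {"multinode-367040": ["multinode-367040", "multinode-367040-m02"]}
print(profile_name_free("multinode-367040-m02", existing))  # prints: False
print(profile_name_free("multinode-367040-m03", existing))  # prints: True
```

This matches the run above: starting profile multinode-367040-m02 is rejected, while multinode-367040-m03 proceeds.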

                                                
                                    
TestPreload (276.32s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-813262 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0907 00:09:24.363802   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-813262 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m47.874285404s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-813262 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-813262 image pull gcr.io/k8s-minikube/busybox: (2.452328473s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-813262
E0907 00:10:47.408395   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
E0907 00:11:08.209140   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
E0907 00:11:53.831938   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-813262: (1m31.567439453s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-813262 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-813262 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m13.186216829s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-813262 image list
helpers_test.go:175: Cleaning up "test-preload-813262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-813262
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-813262: (1.037185279s)
--- PASS: TestPreload (276.32s)

                                                
                                    
TestScheduledStopUnix (119.58s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-086974 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-086974 --memory=2048 --driver=kvm2  --container-runtime=containerd: (48.062328974s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-086974 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-086974 -n scheduled-stop-086974
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-086974 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-086974 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-086974 -n scheduled-stop-086974
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-086974
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-086974 --schedule 15s
E0907 00:14:24.363876   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-086974
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-086974: exit status 7 (57.189657ms)

                                                
                                                
-- stdout --
	scheduled-stop-086974
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-086974 -n scheduled-stop-086974
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-086974 -n scheduled-stop-086974: exit status 7 (56.591515ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-086974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-086974
--- PASS: TestScheduledStopUnix (119.58s)
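`minikube stop --schedule` arms a delayed stop that a later `--cancel-scheduled` disarms, which is the arm/replace/cancel sequence this test exercises. A timer-based sketch of that behavior; minikube actually daemonizes a background process, so this class is illustrative only:

```python
import threading, time

class ScheduledStop:
    """Illustrative arm/cancel timer; not how minikube implements it."""
    def __init__(self):
        self._timer = None

    def schedule(self, delay_s, stop_fn):
        self.cancel()  # a new --schedule replaces any pending one
        self._timer = threading.Timer(delay_s, stop_fn)
        self._timer.start()

    def cancel(self):  # --cancel-scheduled analogue
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None

events = []
s = ScheduledStop()
s.schedule(0.01, lambda: events.append("stopped"))
time.sleep(0.1)
print(events)  # prints: ['stopped']
s.schedule(300, lambda: events.append("late"))
s.cancel()     # timer disarmed; "late" never fires
```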

                                                
                                    
TestRunningBinaryUpgrade (232.13s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.22.0.3750783001.exe start -p running-upgrade-280645 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.22.0.3750783001.exe start -p running-upgrade-280645 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m41.10923332s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-280645 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:142: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-280645 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m6.987069047s)
helpers_test.go:175: Cleaning up "running-upgrade-280645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-280645
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-280645: (1.644186774s)
--- PASS: TestRunningBinaryUpgrade (232.13s)

                                                
                                    
TestKubernetesUpgrade (176.6s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-440648 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-440648 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m8.380507553s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-440648
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-440648: (2.117975727s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-440648 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-440648 status --format={{.Host}}: exit status 7 (69.639548ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-440648 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0907 00:16:53.830066   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-440648 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m28.910785533s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-440648 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-440648 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-440648 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (86.048604ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-440648] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17174
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17174-6521/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6521/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-440648
	    minikube start -p kubernetes-upgrade-440648 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4406482 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-440648 --kubernetes-version=v1.28.1

** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-440648 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-440648 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (15.711546698s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-440648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-440648
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-440648: (1.254678111s)
--- PASS: TestKubernetesUpgrade (176.60s)

TestStoppedBinaryUpgrade/Setup (2.47s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.47s)

TestStoppedBinaryUpgrade/Upgrade (217.24s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.22.0.1554180027.exe start -p stopped-upgrade-163361 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0907 00:16:08.209105   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.22.0.1554180027.exe start -p stopped-upgrade-163361 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m24.889009967s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.22.0.1554180027.exe -p stopped-upgrade-163361 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.22.0.1554180027.exe -p stopped-upgrade-163361 stop: (5.114165512s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-163361 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:210: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-163361 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m7.232408937s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (217.24s)

TestPause/serial/Start (67.98s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-132820 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-132820 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m7.981523432s)
--- PASS: TestPause/serial/Start (67.98s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-808158 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-808158 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (74.943891ms)

-- stdout --
	* [NoKubernetes-808158] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17174
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17174-6521/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6521/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (69.81s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-808158 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-808158 --driver=kvm2  --container-runtime=containerd: (1m9.535518648s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-808158 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (69.81s)

TestPause/serial/SecondStartNoReconfiguration (38.71s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-132820 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-132820 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (38.699715244s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.71s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-163361
version_upgrade_test.go:218: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-163361: (1.425129756s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

TestNetworkPlugins/group/false (4.43s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-639720 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-639720 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (562.649052ms)

-- stdout --
	* [false-639720] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17174
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17174-6521/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6521/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration

-- /stdout --
** stderr ** 
	I0907 00:18:54.408146   36610 out.go:296] Setting OutFile to fd 1 ...
	I0907 00:18:54.408309   36610 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:18:54.408321   36610 out.go:309] Setting ErrFile to fd 2...
	I0907 00:18:54.408328   36610 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0907 00:18:54.408616   36610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-6521/.minikube/bin
	I0907 00:18:54.409332   36610 out.go:303] Setting JSON to false
	I0907 00:18:54.410593   36610 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3681,"bootTime":1694042254,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:18:54.410665   36610 start.go:138] virtualization: kvm guest
	I0907 00:18:54.502011   36610 out.go:177] * [false-639720] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:18:54.536271   36610 out.go:177]   - MINIKUBE_LOCATION=17174
	I0907 00:18:54.536207   36610 notify.go:220] Checking for updates...
	I0907 00:18:54.699669   36610 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:18:54.790035   36610 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17174-6521/kubeconfig
	I0907 00:18:54.791884   36610 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-6521/.minikube
	I0907 00:18:54.793612   36610 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:18:54.795406   36610 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:18:54.797775   36610 config.go:182] Loaded profile config "NoKubernetes-808158": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0907 00:18:54.798073   36610 config.go:182] Loaded profile config "pause-132820": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0907 00:18:54.798205   36610 config.go:182] Loaded profile config "running-upgrade-280645": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0907 00:18:54.798324   36610 driver.go:373] Setting default libvirt URI to qemu:///system
	I0907 00:18:54.911987   36610 out.go:177] * Using the kvm2 driver based on user configuration
	I0907 00:18:54.913697   36610 start.go:298] selected driver: kvm2
	I0907 00:18:54.913717   36610 start.go:902] validating driver "kvm2" against <nil>
	I0907 00:18:54.913731   36610 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:18:54.916097   36610 out.go:177] 
	W0907 00:18:54.917517   36610 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0907 00:18:54.918883   36610 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-639720 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-639720

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-639720

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-639720

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-639720

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-639720

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-639720

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-639720

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-639720

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-639720

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-639720

>>> host: /etc/nsswitch.conf:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: /etc/hosts:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: /etc/resolv.conf:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-639720

>>> host: crictl pods:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: crictl containers:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> k8s: describe netcat deployment:
error: context "false-639720" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-639720" does not exist

>>> k8s: netcat logs:
error: context "false-639720" does not exist

>>> k8s: describe coredns deployment:
error: context "false-639720" does not exist

>>> k8s: describe coredns pods:
error: context "false-639720" does not exist

>>> k8s: coredns logs:
error: context "false-639720" does not exist

>>> k8s: describe api server pod(s):
error: context "false-639720" does not exist

>>> k8s: api server logs:
error: context "false-639720" does not exist

>>> host: /etc/cni:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: ip a s:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: ip r s:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: iptables-save:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: iptables table nat:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> k8s: describe kube-proxy daemon set:
error: context "false-639720" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-639720" does not exist

>>> k8s: kube-proxy logs:
error: context "false-639720" does not exist

>>> host: kubelet daemon status:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: kubelet daemon config:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> k8s: kubelet logs:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17174-6521/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 07 Sep 2023 00:18:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.83.117:8443
  name: pause-132820
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17174-6521/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 07 Sep 2023 00:18:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.72.155:8443
  name: running-upgrade-280645
contexts:
- context:
    cluster: pause-132820
    extensions:
    - extension:
        last-update: Thu, 07 Sep 2023 00:18:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-132820
  name: pause-132820
- context:
    cluster: running-upgrade-280645
    extensions:
    - extension:
        last-update: Thu, 07 Sep 2023 00:18:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: running-upgrade-280645
  name: running-upgrade-280645
current-context: running-upgrade-280645
kind: Config
preferences: {}
users:
- name: pause-132820
  user:
    client-certificate: /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/pause-132820/client.crt
    client-key: /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/pause-132820/client.key
- name: running-upgrade-280645
  user:
    client-certificate: /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/running-upgrade-280645/client.crt
    client-key: /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/running-upgrade-280645/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-639720

>>> host: docker daemon status:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: docker daemon config:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: /etc/docker/daemon.json:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: docker system info:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: cri-docker daemon status:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: cri-docker daemon config:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: cri-dockerd version:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: containerd daemon status:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: containerd daemon config:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: /etc/containerd/config.toml:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: containerd config dump:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: crio daemon status:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: crio daemon config:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: /etc/crio:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

>>> host: crio config:
* Profile "false-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639720"

----------------------- debugLogs end: false-639720 [took: 3.717030051s] --------------------------------
helpers_test.go:175: Cleaning up "false-639720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-639720
--- PASS: TestNetworkPlugins/group/false (4.43s)

TestPause/serial/Pause (0.81s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-132820 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.81s)

TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-132820 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-132820 --output=json --layout=cluster: exit status 2 (275.436835ms)
-- stdout --
	{"Name":"pause-132820","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-132820","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
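The stdout payload above is the `--output=json --layout=cluster` status document; a minimal sketch of reading it with the Python standard library (the payload string is copied verbatim from this report, nothing else is assumed):

```python
import json

# Status payload captured verbatim from the VerifyStatus stdout above.
payload = (
    '{"Name":"pause-132820","StatusCode":418,"StatusName":"Paused",'
    '"Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, '
    'kubernetes-dashboard, storage-gluster, istio-operator",'
    '"BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig",'
    '"StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-132820",'
    '"StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver",'
    '"StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet",'
    '"StatusCode":405,"StatusName":"Stopped"}}}]}'
)

status = json.loads(payload)
node = status["Nodes"][0]
# The paused cluster reports 418 ("Paused") for the apiserver and
# 405 ("Stopped") for the kubelet, which is consistent with the
# non-zero exit of `minikube status` seen above.
print(status["StatusName"])                           # Paused
print(node["Components"]["apiserver"]["StatusName"])  # Paused
print(node["Components"]["kubelet"]["StatusName"])    # Stopped
```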
--- PASS: TestPause/serial/VerifyStatus (0.28s)

TestPause/serial/Unpause (0.71s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-132820 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

TestNoKubernetes/serial/StartWithStopK8s (59.66s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-808158 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-808158 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (58.442825114s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-808158 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-808158 status -o json: exit status 2 (216.4935ms)
-- stdout --
	{"Name":"NoKubernetes-808158","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
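A small sketch of reading the `status -o json` payload above: with `--no-kubernetes` the host VM runs while kubelet and apiserver stay stopped, which matches the exit status 2 recorded in this run (the payload string is copied verbatim from this report):

```python
import json

# Status JSON captured verbatim from the NoKubernetes stdout above.
payload = ('{"Name":"NoKubernetes-808158","Host":"Running","Kubelet":"Stopped",'
           '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

status = json.loads(payload)
# Host is up, but the Kubernetes components are intentionally stopped,
# so `minikube status` exits non-zero even for a healthy profile.
print(status["Host"], status["Kubelet"], status["APIServer"])  # Running Stopped Stopped
```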
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-808158
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (59.66s)

TestPause/serial/PauseAgain (0.72s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-132820 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.72s)

TestPause/serial/DeletePaused (0.97s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-132820 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.97s)

TestPause/serial/VerifyDeletedResources (0.23s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.23s)

TestNoKubernetes/serial/Start (57.01s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-808158 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-808158 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (57.007400706s)
--- PASS: TestNoKubernetes/serial/Start (57.01s)

TestStartStop/group/old-k8s-version/serial/FirstStart (395.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-492968 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-492968 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (6m35.206916732s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (395.21s)

TestStartStop/group/no-preload/serial/FirstStart (143.15s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-171085 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
E0907 00:21:08.208896   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-171085 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (2m23.149727096s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (143.15s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-808158 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-808158 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.28487ms)
** stderr **
	ssh: Process exited with status 3
** /stderr **
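The check above relies on `systemctl is-active` exiting 0 only for an active unit; a trivial sketch of the same interpretation (the exit status 3 is taken from the stderr line above; treating any non-zero exit as "not running" mirrors what the test asserts):

```python
# Exit status observed in the "ssh: Process exited with status 3" line above.
# `systemctl is-active` returns 0 for an active unit and non-zero otherwise,
# so a non-zero exit here is the expected outcome for a --no-kubernetes profile.
observed_exit = 3
kubelet_running = (observed_exit == 0)
print("kubelet running:", kubelet_running)  # kubelet running: False
```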
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (0.7s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.70s)

TestNoKubernetes/serial/Stop (1.16s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-808158
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-808158: (1.163955s)
--- PASS: TestNoKubernetes/serial/Stop (1.16s)

TestNoKubernetes/serial/StartNoArgs (70.94s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-808158 --driver=kvm2  --container-runtime=containerd
E0907 00:21:53.831690   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-808158 --driver=kvm2  --container-runtime=containerd: (1m10.938256948s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (70.94s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-808158 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-808158 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.750578ms)
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

TestStartStop/group/embed-certs/serial/FirstStart (103.3s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-019378 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-019378 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (1m43.30359691s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (103.30s)

TestStartStop/group/no-preload/serial/DeployApp (9.48s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-171085 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a1e7ce47-ba9e-4082-84ac-b9720de76c21] Pending
helpers_test.go:344: "busybox" [a1e7ce47-ba9e-4082-84ac-b9720de76c21] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a1e7ce47-ba9e-4082-84ac-b9720de76c21] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.043321603s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-171085 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.48s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-171085 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-171085 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.177227821s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-171085 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/no-preload/serial/Stop (92.05s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-171085 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-171085 --alsologtostderr -v=3: (1m32.0478856s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.05s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-415299 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-415299 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (1m3.396453674s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.40s)

TestStartStop/group/embed-certs/serial/DeployApp (10.49s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-019378 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [85d472f9-fc20-4dbf-94d5-3cb95d21e0e2] Pending
helpers_test.go:344: "busybox" [85d472f9-fc20-4dbf-94d5-3cb95d21e0e2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [85d472f9-fc20-4dbf-94d5-3cb95d21e0e2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.035330755s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-019378 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.49s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-019378 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-019378 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.009635834s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-019378 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.09s)

TestStartStop/group/embed-certs/serial/Stop (92.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-019378 --alsologtostderr -v=3
E0907 00:24:24.363953   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-019378 --alsologtostderr -v=3: (1m32.173087197s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.17s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-415299 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7375cd79-b20c-4a7a-863f-1fd4b4b4e577] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7375cd79-b20c-4a7a-863f-1fd4b4b4e577] Running
E0907 00:24:56.876913   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.034589868s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-415299 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-415299 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-415299 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.007207615s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-415299 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-415299 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-415299 --alsologtostderr -v=3: (1m31.789750682s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.79s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171085 -n no-preload-171085
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171085 -n no-preload-171085: exit status 7 (56.80638ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-171085 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/no-preload/serial/SecondStart (307.42s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-171085 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-171085 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (5m7.148383215s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-171085 -n no-preload-171085
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (307.42s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-019378 -n embed-certs-019378
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-019378 -n embed-certs-019378: exit status 7 (67.384577ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-019378 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (306.74s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-019378 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
E0907 00:26:08.208134   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-019378 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (5m6.477811089s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-019378 -n embed-certs-019378
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (306.74s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-415299 -n default-k8s-diff-port-415299
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-415299 -n default-k8s-diff-port-415299: exit status 7 (71.208724ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-415299 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (330.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-415299 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-415299 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (5m29.90879846s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-415299 -n default-k8s-diff-port-415299
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (330.21s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-492968 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4d54749b-4b09-4512-90a1-3132ffef7dc1] Pending
helpers_test.go:344: "busybox" [4d54749b-4b09-4512-90a1-3132ffef7dc1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0907 00:26:53.830345   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
helpers_test.go:344: "busybox" [4d54749b-4b09-4512-90a1-3132ffef7dc1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.03648937s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-492968 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.45s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-492968 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-492968 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.80s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (91.89s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-492968 --alsologtostderr -v=3
E0907 00:27:27.409150   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-492968 --alsologtostderr -v=3: (1m31.894208572s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-492968 -n old-k8s-version-492968
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-492968 -n old-k8s-version-492968: exit status 7 (59.540669ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-492968 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (457.67s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-492968 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E0907 00:29:24.363761   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-492968 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m37.369317717s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-492968 -n old-k8s-version-492968
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (457.67s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9b7l6" [7ad44f75-176c-40a8-a199-038d0ed723ac] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019408137s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9b7l6" [7ad44f75-176c-40a8-a199-038d0ed723ac] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012152555s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-171085 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-171085 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.57s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-171085 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-171085 -n no-preload-171085
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-171085 -n no-preload-171085: exit status 2 (249.049285ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-171085 -n no-preload-171085
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-171085 -n no-preload-171085: exit status 2 (248.48855ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-171085 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-171085 -n no-preload-171085
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-171085 -n no-preload-171085
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.57s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (63.04s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-410667 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-410667 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (1m3.038254016s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (63.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fcqpq" [c0063cf7-4797-4290-a528-bc526b821296] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021163456s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fcqpq" [c0063cf7-4797-4290-a528-bc526b821296] Running
E0907 00:31:08.209071   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012558093s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-019378 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-019378 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.57s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-019378 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-019378 -n embed-certs-019378
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-019378 -n embed-certs-019378: exit status 2 (252.371373ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-019378 -n embed-certs-019378
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-019378 -n embed-certs-019378: exit status 2 (247.71096ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-019378 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-019378 -n embed-certs-019378
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-019378 -n embed-certs-019378
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.57s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (101.39s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-639720 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-639720 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m41.393390267s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.45s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-410667 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-410667 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.451417446s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.23s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-410667 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-410667 --alsologtostderr -v=3: (2.233728222s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-410667 -n newest-cni-410667
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-410667 -n newest-cni-410667: exit status 7 (115.799865ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-410667 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (50.46s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-410667 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
E0907 00:31:53.829809   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/functional-369762/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-410667 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (50.193005961s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-410667 -n newest-cni-410667
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (50.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.02s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5szlt" [91c7f4c8-990e-4d86-88e6-dc9cb684ffa0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5szlt" [91c7f4c8-990e-4d86-88e6-dc9cb684ffa0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.022029649s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5szlt" [91c7f4c8-990e-4d86-88e6-dc9cb684ffa0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011984083s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-415299 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-415299 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.65s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-415299 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-415299 -n default-k8s-diff-port-415299
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-415299 -n default-k8s-diff-port-415299: exit status 2 (243.671097ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-415299 -n default-k8s-diff-port-415299
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-415299 -n default-k8s-diff-port-415299: exit status 2 (238.251797ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-415299 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-415299 -n default-k8s-diff-port-415299
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-415299 -n default-k8s-diff-port-415299
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.65s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (76.99s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-639720 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-639720 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m16.986479761s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (76.99s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-410667 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.40s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-410667 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-410667 -n newest-cni-410667
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-410667 -n newest-cni-410667: exit status 2 (239.30654ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-410667 -n newest-cni-410667
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-410667 -n newest-cni-410667: exit status 2 (241.810994ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-410667 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-410667 -n newest-cni-410667
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-410667 -n newest-cni-410667
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (127.99s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-639720 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-639720 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (2m7.986814015s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (127.99s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.19s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-639720 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.39s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-639720 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gbd46" [ab412f8d-bd95-4361-97e8-ddac135ccef9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gbd46" [ab412f8d-bd95-4361-97e8-ddac135ccef9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.037861472s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.39s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.20s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-639720 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)
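The DNS subtest above execs `nslookup kubernetes.default` inside the netcat pod, which only resolves against the cluster's DNS service. A minimal local analogue of the same kind of check, resolving a name any host can answer (no cluster required; the `dns_status` variable and `localhost` target are stand-ins for illustration):

```shell
# Local analogue of the in-pod DNS check: resolve a well-known name
# through the system resolver and record whether it succeeded.
if getent hosts localhost >/dev/null 2>&1; then
  dns_status="ok"
else
  dns_status="failed"
fi
echo "dns check: $dns_status"
```

The real test fails the run if the in-pod lookup returns no answer; this sketch only mirrors the success/failure shape of that probe.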

TestNetworkPlugins/group/auto/Localhost (0.53s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-639720 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.53s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-639720 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (92.55s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-639720 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
E0907 00:33:31.428915   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/no-preload-171085/client.crt: no such file or directory
E0907 00:33:31.434213   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/no-preload-171085/client.crt: no such file or directory
E0907 00:33:31.444540   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/no-preload-171085/client.crt: no such file or directory
E0907 00:33:31.464787   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/no-preload-171085/client.crt: no such file or directory
E0907 00:33:31.505066   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/no-preload-171085/client.crt: no such file or directory
E0907 00:33:31.585636   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/no-preload-171085/client.crt: no such file or directory
E0907 00:33:31.746443   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/no-preload-171085/client.crt: no such file or directory
E0907 00:33:32.067065   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/no-preload-171085/client.crt: no such file or directory
E0907 00:33:32.708150   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/no-preload-171085/client.crt: no such file or directory
E0907 00:33:33.988931   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/no-preload-171085/client.crt: no such file or directory
E0907 00:33:36.549990   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/no-preload-171085/client.crt: no such file or directory
E0907 00:33:41.670644   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/no-preload-171085/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-639720 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m32.551682008s)
--- PASS: TestNetworkPlugins/group/flannel/Start (92.55s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lmtxt" [b594f322-2a43-41be-8c48-b18aafab1d50] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.025085063s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-639720 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-639720 replace --force -f testdata/netcat-deployment.yaml
E0907 00:33:51.911081   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/no-preload-171085/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kd5dn" [64379775-182a-45b5-ba05-babb4a6b4184] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kd5dn" [64379775-182a-45b5-ba05-babb4a6b4184] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.013454766s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.38s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-639720 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-639720 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-639720 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)
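The Localhost and HairPin subtests both run `nc -w 5 -i 5 -z <target> 8080` inside the netcat pod; HairPin targets the pod's own service name to verify hairpin NAT. A minimal local analogue of that port probe, with no cluster needed (port 18080, the throwaway `python3 -m http.server` listener, and the `reachable` variable are stand-ins; the real test probes the netcat service on 8080 from inside the pod):

```shell
# Start a throwaway listener, then check the port the same way the
# test's `nc -z` probe does (zero-I/O connect check with a timeout).
python3 -m http.server 18080 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
reachable="no"
# bash's /dev/tcp pseudo-device opens a TCP connection on redirect.
if timeout 5 bash -c 'exec 3<>/dev/tcp/127.0.0.1/18080' 2>/dev/null; then
  reachable="yes"
fi
kill "$srv" 2>/dev/null
echo "port 18080 reachable: $reachable"
```

The `-w 5` in the real probe is the same connect timeout role that `timeout 5` plays here; a closed port makes the connect fail and leaves `reachable=no`.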

TestNetworkPlugins/group/calico/Start (99.78s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-639720 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
E0907 00:34:24.363658   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/ingress-addon-legacy-757297/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-639720 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m39.782441508s)
--- PASS: TestNetworkPlugins/group/calico/Start (99.78s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-639720 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-639720 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-frbvq" [abe47095-187b-4182-bfa0-fb2917ba0bb7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-frbvq" [abe47095-187b-4182-bfa0-fb2917ba0bb7] Running
E0907 00:34:51.748339   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/default-k8s-diff-port-415299/client.crt: no such file or directory
E0907 00:34:51.753609   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/default-k8s-diff-port-415299/client.crt: no such file or directory
E0907 00:34:51.763861   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/default-k8s-diff-port-415299/client.crt: no such file or directory
E0907 00:34:51.784959   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/default-k8s-diff-port-415299/client.crt: no such file or directory
E0907 00:34:51.825331   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/default-k8s-diff-port-415299/client.crt: no such file or directory
E0907 00:34:51.906034   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/default-k8s-diff-port-415299/client.crt: no such file or directory
E0907 00:34:52.066887   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/default-k8s-diff-port-415299/client.crt: no such file or directory
E0907 00:34:52.387445   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/default-k8s-diff-port-415299/client.crt: no such file or directory
E0907 00:34:53.027718   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/default-k8s-diff-port-415299/client.crt: no such file or directory
E0907 00:34:53.352471   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/no-preload-171085/client.crt: no such file or directory
E0907 00:34:54.307867   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/default-k8s-diff-port-415299/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.011766426s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.38s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vs8pg" [9de26657-5990-4ba9-a24d-3c77f4ffaabb] Running
E0907 00:34:56.868122   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/default-k8s-diff-port-415299/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.030050396s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-639720 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-639720 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-639720 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-639720 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/flannel/NetCatPod (11.48s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-639720 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-m68fl" [54996604-1067-45d6-97c2-896b3b84de73] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0907 00:35:01.988510   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/default-k8s-diff-port-415299/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-m68fl" [54996604-1067-45d6-97c2-896b3b84de73] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.012077893s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.48s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-639720 exec deployment/netcat -- nslookup kubernetes.default
E0907 00:35:12.228648   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/default-k8s-diff-port-415299/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-639720 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-639720 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (106.55s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-639720 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-639720 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m46.552578804s)
--- PASS: TestNetworkPlugins/group/bridge/Start (106.55s)

TestNetworkPlugins/group/custom-flannel/Start (102.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-639720 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0907 00:35:32.709660   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/default-k8s-diff-port-415299/client.crt: no such file or directory
E0907 00:35:51.256085   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-639720 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m42.244618256s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (102.24s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-g9kqc" [77fedc56-95c2-4062-ad27-912ad02577c2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.026519946s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-639720 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

TestNetworkPlugins/group/calico/NetCatPod (10.43s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-639720 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7m84m" [3dd11889-b0e6-429a-b99c-bb9b40a8e2ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7m84m" [3dd11889-b0e6-429a-b99c-bb9b40a8e2ef] Running
E0907 00:36:08.208706   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/addons-594533/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.012960336s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.43s)

TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-639720 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-639720 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-639720 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gg5b6" [a176cffa-5a96-4421-9ce2-20580866a7d8] Running
E0907 00:36:15.272976   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/no-preload-171085/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019975388s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gg5b6" [a176cffa-5a96-4421-9ce2-20580866a7d8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020314876s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-492968 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-492968 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)
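The image check above runs `sudo crictl images -o json` on the node and reports image tags it does not expect. A sketch of that kind of scan over `crictl`-style JSON, with inlined sample data mirroring the tags seen in this run (the `registry.k8s.io/` prefix allowlist is illustrative, not the test's real expected-image list):

```shell
# Filter "non-minikube" tags out of crictl-style `images -o json` output.
# Sample data is inlined so no container runtime is needed.
scan=$(python3 - <<'PYEOF'
import json

sample = '''{"images": [
  {"repoTags": ["kindest/kindnetd:v20210326-1e038dc5"]},
  {"repoTags": ["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]},
  {"repoTags": ["registry.k8s.io/pause:3.9"]}
]}'''

for img in json.loads(sample)["images"]:
    for tag in img["repoTags"]:
        # Hypothetical allowlist: anything outside registry.k8s.io is reported.
        if not tag.startswith("registry.k8s.io/"):
            print("Found non-minikube image:", tag)
PYEOF
)
echo "$scan"
```

With this sample the scan reports the kindnetd and busybox tags and skips the pause image, matching the shape of the two "Found non-minikube image" lines in the log above.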

TestStartStop/group/old-k8s-version/serial/Pause (2.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-492968 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-492968 -n old-k8s-version-492968
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-492968 -n old-k8s-version-492968: exit status 2 (255.181462ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-492968 -n old-k8s-version-492968
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-492968 -n old-k8s-version-492968: exit status 2 (274.433565ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-492968 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-492968 -n old-k8s-version-492968
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-492968 -n old-k8s-version-492968
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.92s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-639720 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

TestNetworkPlugins/group/bridge/NetCatPod (10.30s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-639720 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s6bhh" [57459976-ac27-424c-a7c9-941cf9356794] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0907 00:37:02.669975   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/old-k8s-version-492968/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-s6bhh" [57459976-ac27-424c-a7c9-941cf9356794] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.009184557s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.30s)

TestNetworkPlugins/group/bridge/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-639720 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-639720 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-639720 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-639720 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-639720 replace --force -f testdata/netcat-deployment.yaml
E0907 00:37:12.910390   13704 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/old-k8s-version-492968/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9r685" [2fce6c49-f7e3-47c3-aa76-db4e9c049287] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9r685" [2fce6c49-f7e3-47c3-aa76-db4e9c049287] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.008780748s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.38s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-639720 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-639720 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-639720 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

Test skip (36/302)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.1/cached-images 0
13 TestDownloadOnly/v1.28.1/binaries 0
14 TestDownloadOnly/v1.28.1/kubectl 0
18 TestDownloadOnlyKic 0
29 TestAddons/parallel/Olm 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
146 TestImageBuild 0
179 TestKicCustomNetwork 0
180 TestKicExistingNetwork 0
181 TestKicCustomSubnet 0
182 TestKicStaticIP 0
213 TestChangeNoneUser 0
216 TestScheduledStopWindows 0
218 TestSkaffold 0
220 TestInsufficientStorage 0
224 TestMissingContainerUpgrade 0
231 TestStartStop/group/disable-driver-mounts 0.12
243 TestNetworkPlugins/group/kubenet 3.56
251 TestNetworkPlugins/group/cilium 3.32

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

TestDownloadOnly/v1.28.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

TestDownloadOnly/v1.28.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-596932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-596932
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)

TestNetworkPlugins/group/kubenet (3.56s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-639720 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-639720

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-639720

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-639720

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-639720

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-639720

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-639720

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-639720

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-639720

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-639720

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-639720

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: /etc/hosts:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: /etc/resolv.conf:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-639720

>>> host: crictl pods:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: crictl containers:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> k8s: describe netcat deployment:
error: context "kubenet-639720" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-639720" does not exist

>>> k8s: netcat logs:
error: context "kubenet-639720" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-639720" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-639720" does not exist

>>> k8s: coredns logs:
error: context "kubenet-639720" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-639720" does not exist

>>> k8s: api server logs:
error: context "kubenet-639720" does not exist

>>> host: /etc/cni:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: ip a s:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: ip r s:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: iptables-save:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: iptables table nat:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-639720" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-639720" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-639720" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: kubelet daemon config:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> k8s: kubelet logs:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17174-6521/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 07 Sep 2023 00:18:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.83.117:8443
  name: pause-132820
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17174-6521/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 07 Sep 2023 00:18:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.72.155:8443
  name: running-upgrade-280645
contexts:
- context:
    cluster: pause-132820
    extensions:
    - extension:
        last-update: Thu, 07 Sep 2023 00:18:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-132820
  name: pause-132820
- context:
    cluster: running-upgrade-280645
    user: running-upgrade-280645
  name: running-upgrade-280645
current-context: pause-132820
kind: Config
preferences: {}
users:
- name: pause-132820
  user:
    client-certificate: /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/pause-132820/client.crt
    client-key: /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/pause-132820/client.key
- name: running-upgrade-280645
  user:
    client-certificate: /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/running-upgrade-280645/client.crt
    client-key: /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/running-upgrade-280645/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-639720

>>> host: docker daemon status:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: docker daemon config:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: docker system info:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: cri-docker daemon status:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: cri-docker daemon config:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: cri-dockerd version:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: containerd daemon status:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: containerd daemon config:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: containerd config dump:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: crio daemon status:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: crio daemon config:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: /etc/crio:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

>>> host: crio config:
* Profile "kubenet-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639720"

----------------------- debugLogs end: kubenet-639720 [took: 3.378808414s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-639720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-639720
--- SKIP: TestNetworkPlugins/group/kubenet (3.56s)
x
+
TestNetworkPlugins/group/cilium (3.32s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-639720 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-639720

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-639720

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-639720

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-639720

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-639720

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-639720

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-639720

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-639720

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-639720

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-639720

>>> host: /etc/nsswitch.conf:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: /etc/hosts:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: /etc/resolv.conf:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-639720

>>> host: crictl pods:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: crictl containers:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> k8s: describe netcat deployment:
error: context "cilium-639720" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-639720" does not exist

>>> k8s: netcat logs:
error: context "cilium-639720" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-639720" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-639720" does not exist

>>> k8s: coredns logs:
error: context "cilium-639720" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-639720" does not exist

>>> k8s: api server logs:
error: context "cilium-639720" does not exist

>>> host: /etc/cni:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: ip a s:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: ip r s:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: iptables-save:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: iptables table nat:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-639720

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-639720

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-639720" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-639720" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-639720

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-639720

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-639720" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-639720" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-639720" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-639720" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-639720" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: kubelet daemon config:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> k8s: kubelet logs:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17174-6521/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 07 Sep 2023 00:18:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.83.117:8443
  name: pause-132820
contexts:
- context:
    cluster: pause-132820
    extensions:
    - extension:
        last-update: Thu, 07 Sep 2023 00:18:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-132820
  name: pause-132820
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-132820
  user:
    client-certificate: /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/pause-132820/client.crt
    client-key: /home/jenkins/minikube-integration/17174-6521/.minikube/profiles/pause-132820/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-639720

>>> host: docker daemon status:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: docker daemon config:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: docker system info:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: cri-docker daemon status:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: cri-docker daemon config:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: cri-dockerd version:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: containerd daemon status:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: containerd daemon config:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: containerd config dump:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: crio daemon status:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: crio daemon config:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: /etc/crio:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

>>> host: crio config:
* Profile "cilium-639720" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639720"

----------------------- debugLogs end: cilium-639720 [took: 3.168140386s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-639720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-639720
--- SKIP: TestNetworkPlugins/group/cilium (3.32s)