Test Report: KVM_Linux_containerd 17314

720b04249cd58de6fa013ef84ee34e212d9c3117:2023-10-06:31319

Failed tests (2/306)

Order  Failed test                  Duration (s)
28     TestAddons/parallel/Ingress  33.93
52     TestErrorSpam/setup          62.97
TestAddons/parallel/Ingress (33.93s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-565340 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-565340 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-565340 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1e863b91-9ca2-4b5f-be87-cd64cbb6debd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1e863b91-9ca2-4b5f-be87-cd64cbb6debd] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 21.015346667s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-565340 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-565340 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-565340 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.147
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-565340 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-565340 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (382.687207ms)

-- stdout --
	
	
-- /stdout --
** stderr **
	I1006 01:05:27.327405   76226 out.go:296] Setting OutFile to fd 1 ...
	I1006 01:05:27.327507   76226 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:05:27.327515   76226 out.go:309] Setting ErrFile to fd 2...
	I1006 01:05:27.327520   76226 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:05:27.327683   76226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-66550/.minikube/bin
	I1006 01:05:27.327950   76226 mustload.go:65] Loading cluster: addons-565340
	I1006 01:05:27.328299   76226 config.go:182] Loaded profile config "addons-565340": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1006 01:05:27.328321   76226 addons.go:594] checking whether the cluster is paused
	I1006 01:05:27.328448   76226 config.go:182] Loaded profile config "addons-565340": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1006 01:05:27.328463   76226 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:05:27.328813   76226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:05:27.328856   76226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:05:27.342910   76226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I1006 01:05:27.343401   76226 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:05:27.344024   76226 main.go:141] libmachine: Using API Version  1
	I1006 01:05:27.344059   76226 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:05:27.344417   76226 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:05:27.344594   76226 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:05:27.346246   76226 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:05:27.346481   76226 ssh_runner.go:195] Run: systemctl --version
	I1006 01:05:27.346503   76226 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:05:27.348909   76226 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:05:27.349320   76226 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:05:27.349354   76226 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:05:27.349481   76226 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:05:27.349645   76226 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:05:27.349806   76226 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:05:27.349953   76226 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:05:27.440930   76226 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1006 01:05:27.440998   76226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 01:05:27.545349   76226 cri.go:89] found id: "c822729a606349f57deacd260d705504def176af9d944f561296a0eabc7852dd"
	I1006 01:05:27.545376   76226 cri.go:89] found id: "9e469c3ef356c9762042217ebc16f79300e603e160f581df8d61168ca8e66c86"
	I1006 01:05:27.545383   76226 cri.go:89] found id: "2a6de8adf503a1b845ca284de10949da31f134ffbf6f5727f6442ea4085c61a7"
	I1006 01:05:27.545391   76226 cri.go:89] found id: "41b9b18ebba0b835d85ba449ee807a6c38924e7e5ec41e9dfbb825c5c14cd50e"
	I1006 01:05:27.545396   76226 cri.go:89] found id: "ef951384e4ba718a8e7441d7ef6df3bf46b26741eb2a59d6a477c79fe2177b7c"
	I1006 01:05:27.545406   76226 cri.go:89] found id: "9272272d3fddbb79fa1e9306d1e6c715ebd0b988ac691c7e419806d0498a507d"
	I1006 01:05:27.545411   76226 cri.go:89] found id: "88c6f06234ae2dfc1bebea85d5c06fefce47458553bb245edad7416307f132e9"
	I1006 01:05:27.545417   76226 cri.go:89] found id: "f5c20850de2d79a5024e281bbbc04b9de3f80ff217274962762955e441bdca29"
	I1006 01:05:27.545422   76226 cri.go:89] found id: "1d05e541205d90c0c2d466d9d0af323bab2e2e098907968c1f9571da5a0510cb"
	I1006 01:05:27.545430   76226 cri.go:89] found id: "017b57f8a013fc67f0e5b53200311ebd1a8fbeffbc83f9071e38643da9315173"
	I1006 01:05:27.545436   76226 cri.go:89] found id: "4381d8fa89f72470d2271da46fdb192edcb8e01e6b5738c5da0f517eb044fb67"
	I1006 01:05:27.545441   76226 cri.go:89] found id: "433f8cf74022dc2447e02f2c8b8603d7c44ca70774b3a76f0475900f9089f07b"
	I1006 01:05:27.545451   76226 cri.go:89] found id: "f1514a7c6543a765c70f4197a225f413c2881da151957df91d93d339f60b54fe"
	I1006 01:05:27.545465   76226 cri.go:89] found id: "0ab9501ba3a1f02fd8d7a99e31efa6394d93bfcf88c7cc9c66c86e0095c0a678"
	I1006 01:05:27.545475   76226 cri.go:89] found id: "368f71dbcb2d3034f059177e098c101a602f3e6b24e9fab70a1531fdfcd8d361"
	I1006 01:05:27.545480   76226 cri.go:89] found id: "43ba6e202cd6efdc55545d31bcf63a8c1ca7fa2f493d0fa62558e9055c15c8df"
	I1006 01:05:27.545489   76226 cri.go:89] found id: "14329d92c3287792021d19cf8dbf2f31e0784d71acab3814de0be076b35ed5c7"
	I1006 01:05:27.545499   76226 cri.go:89] found id: "c1c8515aedefb6b3d23a49e03c91046aab5360b155b1295f18f778f0916bde26"
	I1006 01:05:27.545507   76226 cri.go:89] found id: "7052bbc0ce89ab31534ba7490a91c018f17bf7232ea6f412bb4abc686af0a6e2"
	I1006 01:05:27.545515   76226 cri.go:89] found id: "b8d31b5e3d619da81b9537ec129390fed9dc1e7117fe4355a6bab99688910a12"
	I1006 01:05:27.545520   76226 cri.go:89] found id: ""
	I1006 01:05:27.545561   76226 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1006 01:05:27.633843   76226 main.go:141] libmachine: Making call to close driver server
	I1006 01:05:27.633860   76226 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:05:27.634110   76226 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:05:27.634133   76226 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:05:27.634134   76226 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:05:27.636161   76226 out.go:177] 
	W1006 01:05:27.637494   76226 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-10-06T01:05:27Z" level=error msg="stat /run/containerd/runc/k8s.io/c822729a606349f57deacd260d705504def176af9d944f561296a0eabc7852dd: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-10-06T01:05:27Z" level=error msg="stat /run/containerd/runc/k8s.io/c822729a606349f57deacd260d705504def176af9d944f561296a0eabc7852dd: no such file or directory"
	
	W1006 01:05:27.637511   76226 out.go:239] * 
	* 
	W1006 01:05:27.640059   76226 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1006 01:05:27.641389   76226 out.go:177] 

** /stderr **
addons_test.go:307: failed to disable ingress-dns addon. args "out/minikube-linux-amd64 -p addons-565340 addons disable ingress-dns --alsologtostderr -v=1" : exit status 11
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-565340 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-565340 addons disable ingress --alsologtostderr -v=1: (7.872552845s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-565340 -n addons-565340
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-565340 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-565340 logs -n 25: (2.045680102s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-034185 | jenkins | v1.31.2 | 06 Oct 23 00:59 UTC |                     |
	|         | -p download-only-034185                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-034185 | jenkins | v1.31.2 | 06 Oct 23 01:00 UTC |                     |
	|         | -p download-only-034185                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.31.2 | 06 Oct 23 01:00 UTC | 06 Oct 23 01:00 UTC |
	| delete  | -p download-only-034185                                                                     | download-only-034185 | jenkins | v1.31.2 | 06 Oct 23 01:00 UTC | 06 Oct 23 01:00 UTC |
	| delete  | -p download-only-034185                                                                     | download-only-034185 | jenkins | v1.31.2 | 06 Oct 23 01:00 UTC | 06 Oct 23 01:00 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-300718 | jenkins | v1.31.2 | 06 Oct 23 01:00 UTC |                     |
	|         | binary-mirror-300718                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33405                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-300718                                                                     | binary-mirror-300718 | jenkins | v1.31.2 | 06 Oct 23 01:00 UTC | 06 Oct 23 01:00 UTC |
	| addons  | enable dashboard -p                                                                         | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:00 UTC |                     |
	|         | addons-565340                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:00 UTC |                     |
	|         | addons-565340                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-565340 --wait=true                                                                | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:00 UTC | 06 Oct 23 01:04 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-565340 addons                                                                        | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:04 UTC | 06 Oct 23 01:04 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:04 UTC | 06 Oct 23 01:05 UTC |
	|         | addons-565340                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-565340 ssh cat                                                                       | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:05 UTC | 06 Oct 23 01:05 UTC |
	|         | /opt/local-path-provisioner/pvc-0eee7991-a6eb-4053-90a9-4fed6ee19f1f_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-565340 addons disable                                                                | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:05 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-565340 ip                                                                            | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:05 UTC | 06 Oct 23 01:05 UTC |
	| addons  | addons-565340 addons disable                                                                | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:05 UTC | 06 Oct 23 01:05 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:05 UTC | 06 Oct 23 01:05 UTC |
	|         | -p addons-565340                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:05 UTC | 06 Oct 23 01:05 UTC |
	|         | addons-565340                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-565340 ssh curl -s                                                                   | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:05 UTC | 06 Oct 23 01:05 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-565340 ip                                                                            | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:05 UTC | 06 Oct 23 01:05 UTC |
	| addons  | addons-565340 addons disable                                                                | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:05 UTC |                     |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-565340 addons disable                                                                | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:05 UTC | 06 Oct 23 01:05 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-565340 addons disable                                                                | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:05 UTC | 06 Oct 23 01:05 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-565340        | jenkins | v1.31.2 | 06 Oct 23 01:05 UTC | 06 Oct 23 01:05 UTC |
	|         | -p addons-565340                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/06 01:00:56
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 01:00:56.913681   74302 out.go:296] Setting OutFile to fd 1 ...
	I1006 01:00:56.913826   74302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:00:56.913839   74302 out.go:309] Setting ErrFile to fd 2...
	I1006 01:00:56.913848   74302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:00:56.914088   74302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-66550/.minikube/bin
	I1006 01:00:56.914731   74302 out.go:303] Setting JSON to false
	I1006 01:00:56.915617   74302 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6200,"bootTime":1696547857,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 01:00:56.915677   74302 start.go:138] virtualization: kvm guest
	I1006 01:00:56.917877   74302 out.go:177] * [addons-565340] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1006 01:00:56.920085   74302 notify.go:220] Checking for updates...
	I1006 01:00:56.921514   74302 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 01:00:56.922853   74302 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 01:00:56.924315   74302 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-66550/kubeconfig
	I1006 01:00:56.925751   74302 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-66550/.minikube
	I1006 01:00:56.927182   74302 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 01:00:56.928419   74302 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 01:00:56.929793   74302 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 01:00:56.962902   74302 out.go:177] * Using the kvm2 driver based on user configuration
	I1006 01:00:56.964074   74302 start.go:298] selected driver: kvm2
	I1006 01:00:56.964084   74302 start.go:902] validating driver "kvm2" against <nil>
	I1006 01:00:56.964096   74302 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 01:00:56.964858   74302 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 01:00:56.964933   74302 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17314-66550/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 01:00:56.978844   74302 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1006 01:00:56.978894   74302 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1006 01:00:56.979089   74302 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 01:00:56.979154   74302 cni.go:84] Creating CNI manager for ""
	I1006 01:00:56.979167   74302 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1006 01:00:56.979177   74302 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1006 01:00:56.979185   74302 start_flags.go:323] config:
	{Name:addons-565340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-565340 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 01:00:56.979317   74302 iso.go:125] acquiring lock: {Name:mk59b3e5fbcca8f5b6f4ff791dcd43d3ee60c748 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 01:00:56.981150   74302 out.go:177] * Starting control plane node addons-565340 in cluster addons-565340
	I1006 01:00:56.982433   74302 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1006 01:00:56.982476   74302 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17314-66550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4
	I1006 01:00:56.982489   74302 cache.go:57] Caching tarball of preloaded images
	I1006 01:00:56.982586   74302 preload.go:174] Found /home/jenkins/minikube-integration/17314-66550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1006 01:00:56.982601   74302 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on containerd
	I1006 01:00:56.983022   74302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/config.json ...
	I1006 01:00:56.983051   74302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/config.json: {Name:mka365cd1d4b84cf1960749456fc368a750f923f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:00:56.983194   74302 start.go:365] acquiring machines lock for addons-565340: {Name:mk56d85cff4d078c2b7afb28177592c6b74c7ea0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1006 01:00:56.983244   74302 start.go:369] acquired machines lock for "addons-565340" in 32.525µs
	I1006 01:00:56.983262   74302 start.go:93] Provisioning new machine with config: &{Name:addons-565340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:addons-565340 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1006 01:00:56.983333   74302 start.go:125] createHost starting for "" (driver="kvm2")
	I1006 01:00:56.985010   74302 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1006 01:00:56.985148   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:00:56.985187   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:00:56.999168   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42873
	I1006 01:00:56.999664   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:00:57.000335   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:00:57.000356   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:00:57.000751   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:00:57.000943   74302 main.go:141] libmachine: (addons-565340) Calling .GetMachineName
	I1006 01:00:57.001117   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:00:57.001252   74302 start.go:159] libmachine.API.Create for "addons-565340" (driver="kvm2")
	I1006 01:00:57.001305   74302 client.go:168] LocalClient.Create starting
	I1006 01:00:57.001359   74302 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17314-66550/.minikube/certs/ca.pem
	I1006 01:00:57.132042   74302 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17314-66550/.minikube/certs/cert.pem
	I1006 01:00:57.303506   74302 main.go:141] libmachine: Running pre-create checks...
	I1006 01:00:57.303532   74302 main.go:141] libmachine: (addons-565340) Calling .PreCreateCheck
	I1006 01:00:57.304078   74302 main.go:141] libmachine: (addons-565340) Calling .GetConfigRaw
	I1006 01:00:57.304524   74302 main.go:141] libmachine: Creating machine...
	I1006 01:00:57.304543   74302 main.go:141] libmachine: (addons-565340) Calling .Create
	I1006 01:00:57.304717   74302 main.go:141] libmachine: (addons-565340) Creating KVM machine...
	I1006 01:00:57.306005   74302 main.go:141] libmachine: (addons-565340) DBG | found existing default KVM network
	I1006 01:00:57.307061   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:00:57.306835   74323 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001478f0}
	I1006 01:00:57.312654   74302 main.go:141] libmachine: (addons-565340) DBG | trying to create private KVM network mk-addons-565340 192.168.39.0/24...
	I1006 01:00:57.381660   74302 main.go:141] libmachine: (addons-565340) DBG | private KVM network mk-addons-565340 192.168.39.0/24 created
	I1006 01:00:57.381696   74302 main.go:141] libmachine: (addons-565340) Setting up store path in /home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340 ...
	I1006 01:00:57.381710   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:00:57.381610   74323 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17314-66550/.minikube
	I1006 01:00:57.381730   74302 main.go:141] libmachine: (addons-565340) Building disk image from file:///home/jenkins/minikube-integration/17314-66550/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1006 01:00:57.381822   74302 main.go:141] libmachine: (addons-565340) Downloading /home/jenkins/minikube-integration/17314-66550/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17314-66550/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1006 01:00:57.605746   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:00:57.605624   74323 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa...
	I1006 01:00:57.820255   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:00:57.820121   74323 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/addons-565340.rawdisk...
	I1006 01:00:57.820302   74302 main.go:141] libmachine: (addons-565340) DBG | Writing magic tar header
	I1006 01:00:57.820320   74302 main.go:141] libmachine: (addons-565340) DBG | Writing SSH key tar header
	I1006 01:00:57.820340   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:00:57.820234   74323 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340 ...
	I1006 01:00:57.820360   74302 main.go:141] libmachine: (addons-565340) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340
	I1006 01:00:57.820369   74302 main.go:141] libmachine: (addons-565340) Setting executable bit set on /home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340 (perms=drwx------)
	I1006 01:00:57.820384   74302 main.go:141] libmachine: (addons-565340) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17314-66550/.minikube/machines
	I1006 01:00:57.820427   74302 main.go:141] libmachine: (addons-565340) Setting executable bit set on /home/jenkins/minikube-integration/17314-66550/.minikube/machines (perms=drwxr-xr-x)
	I1006 01:00:57.820461   74302 main.go:141] libmachine: (addons-565340) Setting executable bit set on /home/jenkins/minikube-integration/17314-66550/.minikube (perms=drwxr-xr-x)
	I1006 01:00:57.820476   74302 main.go:141] libmachine: (addons-565340) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17314-66550/.minikube
	I1006 01:00:57.820491   74302 main.go:141] libmachine: (addons-565340) Setting executable bit set on /home/jenkins/minikube-integration/17314-66550 (perms=drwxrwxr-x)
	I1006 01:00:57.820507   74302 main.go:141] libmachine: (addons-565340) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1006 01:00:57.820521   74302 main.go:141] libmachine: (addons-565340) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17314-66550
	I1006 01:00:57.820539   74302 main.go:141] libmachine: (addons-565340) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1006 01:00:57.820548   74302 main.go:141] libmachine: (addons-565340) DBG | Checking permissions on dir: /home/jenkins
	I1006 01:00:57.820558   74302 main.go:141] libmachine: (addons-565340) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1006 01:00:57.820575   74302 main.go:141] libmachine: (addons-565340) Creating domain...
	I1006 01:00:57.820589   74302 main.go:141] libmachine: (addons-565340) DBG | Checking permissions on dir: /home
	I1006 01:00:57.820604   74302 main.go:141] libmachine: (addons-565340) DBG | Skipping /home - not owner
	I1006 01:00:57.821556   74302 main.go:141] libmachine: (addons-565340) define libvirt domain using xml: 
	I1006 01:00:57.821578   74302 main.go:141] libmachine: (addons-565340) <domain type='kvm'>
	I1006 01:00:57.821587   74302 main.go:141] libmachine: (addons-565340)   <name>addons-565340</name>
	I1006 01:00:57.821593   74302 main.go:141] libmachine: (addons-565340)   <memory unit='MiB'>4000</memory>
	I1006 01:00:57.821599   74302 main.go:141] libmachine: (addons-565340)   <vcpu>2</vcpu>
	I1006 01:00:57.821607   74302 main.go:141] libmachine: (addons-565340)   <features>
	I1006 01:00:57.821633   74302 main.go:141] libmachine: (addons-565340)     <acpi/>
	I1006 01:00:57.821655   74302 main.go:141] libmachine: (addons-565340)     <apic/>
	I1006 01:00:57.821669   74302 main.go:141] libmachine: (addons-565340)     <pae/>
	I1006 01:00:57.821681   74302 main.go:141] libmachine: (addons-565340)     
	I1006 01:00:57.821695   74302 main.go:141] libmachine: (addons-565340)   </features>
	I1006 01:00:57.821708   74302 main.go:141] libmachine: (addons-565340)   <cpu mode='host-passthrough'>
	I1006 01:00:57.821718   74302 main.go:141] libmachine: (addons-565340)   
	I1006 01:00:57.821753   74302 main.go:141] libmachine: (addons-565340)   </cpu>
	I1006 01:00:57.821771   74302 main.go:141] libmachine: (addons-565340)   <os>
	I1006 01:00:57.821787   74302 main.go:141] libmachine: (addons-565340)     <type>hvm</type>
	I1006 01:00:57.821802   74302 main.go:141] libmachine: (addons-565340)     <boot dev='cdrom'/>
	I1006 01:00:57.821819   74302 main.go:141] libmachine: (addons-565340)     <boot dev='hd'/>
	I1006 01:00:57.821834   74302 main.go:141] libmachine: (addons-565340)     <bootmenu enable='no'/>
	I1006 01:00:57.821850   74302 main.go:141] libmachine: (addons-565340)   </os>
	I1006 01:00:57.821864   74302 main.go:141] libmachine: (addons-565340)   <devices>
	I1006 01:00:57.821878   74302 main.go:141] libmachine: (addons-565340)     <disk type='file' device='cdrom'>
	I1006 01:00:57.821936   74302 main.go:141] libmachine: (addons-565340)       <source file='/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/boot2docker.iso'/>
	I1006 01:00:57.821967   74302 main.go:141] libmachine: (addons-565340)       <target dev='hdc' bus='scsi'/>
	I1006 01:00:57.821983   74302 main.go:141] libmachine: (addons-565340)       <readonly/>
	I1006 01:00:57.822000   74302 main.go:141] libmachine: (addons-565340)     </disk>
	I1006 01:00:57.822015   74302 main.go:141] libmachine: (addons-565340)     <disk type='file' device='disk'>
	I1006 01:00:57.822029   74302 main.go:141] libmachine: (addons-565340)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1006 01:00:57.822045   74302 main.go:141] libmachine: (addons-565340)       <source file='/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/addons-565340.rawdisk'/>
	I1006 01:00:57.822061   74302 main.go:141] libmachine: (addons-565340)       <target dev='hda' bus='virtio'/>
	I1006 01:00:57.822075   74302 main.go:141] libmachine: (addons-565340)     </disk>
	I1006 01:00:57.822092   74302 main.go:141] libmachine: (addons-565340)     <interface type='network'>
	I1006 01:00:57.822107   74302 main.go:141] libmachine: (addons-565340)       <source network='mk-addons-565340'/>
	I1006 01:00:57.822119   74302 main.go:141] libmachine: (addons-565340)       <model type='virtio'/>
	I1006 01:00:57.822131   74302 main.go:141] libmachine: (addons-565340)     </interface>
	I1006 01:00:57.822139   74302 main.go:141] libmachine: (addons-565340)     <interface type='network'>
	I1006 01:00:57.822154   74302 main.go:141] libmachine: (addons-565340)       <source network='default'/>
	I1006 01:00:57.822167   74302 main.go:141] libmachine: (addons-565340)       <model type='virtio'/>
	I1006 01:00:57.822177   74302 main.go:141] libmachine: (addons-565340)     </interface>
	I1006 01:00:57.822191   74302 main.go:141] libmachine: (addons-565340)     <serial type='pty'>
	I1006 01:00:57.822203   74302 main.go:141] libmachine: (addons-565340)       <target port='0'/>
	I1006 01:00:57.822216   74302 main.go:141] libmachine: (addons-565340)     </serial>
	I1006 01:00:57.822228   74302 main.go:141] libmachine: (addons-565340)     <console type='pty'>
	I1006 01:00:57.822238   74302 main.go:141] libmachine: (addons-565340)       <target type='serial' port='0'/>
	I1006 01:00:57.822250   74302 main.go:141] libmachine: (addons-565340)     </console>
	I1006 01:00:57.822264   74302 main.go:141] libmachine: (addons-565340)     <rng model='virtio'>
	I1006 01:00:57.822275   74302 main.go:141] libmachine: (addons-565340)       <backend model='random'>/dev/random</backend>
	I1006 01:00:57.822289   74302 main.go:141] libmachine: (addons-565340)     </rng>
	I1006 01:00:57.822300   74302 main.go:141] libmachine: (addons-565340)     
	I1006 01:00:57.822312   74302 main.go:141] libmachine: (addons-565340)     
	I1006 01:00:57.822336   74302 main.go:141] libmachine: (addons-565340)   </devices>
	I1006 01:00:57.822357   74302 main.go:141] libmachine: (addons-565340) </domain>
	I1006 01:00:57.822375   74302 main.go:141] libmachine: (addons-565340) 
	I1006 01:00:57.826426   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:a3:43:65 in network default
	I1006 01:00:57.827083   74302 main.go:141] libmachine: (addons-565340) Ensuring networks are active...
	I1006 01:00:57.827103   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:00:57.827761   74302 main.go:141] libmachine: (addons-565340) Ensuring network default is active
	I1006 01:00:57.828129   74302 main.go:141] libmachine: (addons-565340) Ensuring network mk-addons-565340 is active
	I1006 01:00:57.828601   74302 main.go:141] libmachine: (addons-565340) Getting domain xml...
	I1006 01:00:57.829309   74302 main.go:141] libmachine: (addons-565340) Creating domain...
	I1006 01:00:59.031381   74302 main.go:141] libmachine: (addons-565340) Waiting to get IP...
	I1006 01:00:59.032224   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:00:59.032613   74302 main.go:141] libmachine: (addons-565340) DBG | unable to find current IP address of domain addons-565340 in network mk-addons-565340
	I1006 01:00:59.032652   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:00:59.032598   74323 retry.go:31] will retry after 289.622882ms: waiting for machine to come up
	I1006 01:00:59.324169   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:00:59.324620   74302 main.go:141] libmachine: (addons-565340) DBG | unable to find current IP address of domain addons-565340 in network mk-addons-565340
	I1006 01:00:59.324650   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:00:59.324579   74323 retry.go:31] will retry after 258.015993ms: waiting for machine to come up
	I1006 01:00:59.584010   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:00:59.584409   74302 main.go:141] libmachine: (addons-565340) DBG | unable to find current IP address of domain addons-565340 in network mk-addons-565340
	I1006 01:00:59.584442   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:00:59.584350   74323 retry.go:31] will retry after 395.637305ms: waiting for machine to come up
	I1006 01:00:59.981913   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:00:59.982369   74302 main.go:141] libmachine: (addons-565340) DBG | unable to find current IP address of domain addons-565340 in network mk-addons-565340
	I1006 01:00:59.982403   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:00:59.982303   74323 retry.go:31] will retry after 548.903709ms: waiting for machine to come up
	I1006 01:01:00.532932   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:00.533292   74302 main.go:141] libmachine: (addons-565340) DBG | unable to find current IP address of domain addons-565340 in network mk-addons-565340
	I1006 01:01:00.533313   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:01:00.533235   74323 retry.go:31] will retry after 576.70968ms: waiting for machine to come up
	I1006 01:01:01.111923   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:01.112445   74302 main.go:141] libmachine: (addons-565340) DBG | unable to find current IP address of domain addons-565340 in network mk-addons-565340
	I1006 01:01:01.112479   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:01:01.112360   74323 retry.go:31] will retry after 808.622483ms: waiting for machine to come up
	I1006 01:01:01.922174   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:01.922581   74302 main.go:141] libmachine: (addons-565340) DBG | unable to find current IP address of domain addons-565340 in network mk-addons-565340
	I1006 01:01:01.922615   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:01:01.922533   74323 retry.go:31] will retry after 955.314602ms: waiting for machine to come up
	I1006 01:01:02.878921   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:02.879361   74302 main.go:141] libmachine: (addons-565340) DBG | unable to find current IP address of domain addons-565340 in network mk-addons-565340
	I1006 01:01:02.879388   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:01:02.879301   74323 retry.go:31] will retry after 1.049293512s: waiting for machine to come up
	I1006 01:01:03.930437   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:03.930873   74302 main.go:141] libmachine: (addons-565340) DBG | unable to find current IP address of domain addons-565340 in network mk-addons-565340
	I1006 01:01:03.930899   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:01:03.930824   74323 retry.go:31] will retry after 1.590785083s: waiting for machine to come up
	I1006 01:01:05.522796   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:05.523205   74302 main.go:141] libmachine: (addons-565340) DBG | unable to find current IP address of domain addons-565340 in network mk-addons-565340
	I1006 01:01:05.523238   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:01:05.523154   74323 retry.go:31] will retry after 1.686891524s: waiting for machine to come up
	I1006 01:01:07.211118   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:07.211513   74302 main.go:141] libmachine: (addons-565340) DBG | unable to find current IP address of domain addons-565340 in network mk-addons-565340
	I1006 01:01:07.211545   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:01:07.211459   74323 retry.go:31] will retry after 2.074932024s: waiting for machine to come up
	I1006 01:01:09.287784   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:09.288216   74302 main.go:141] libmachine: (addons-565340) DBG | unable to find current IP address of domain addons-565340 in network mk-addons-565340
	I1006 01:01:09.288244   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:01:09.288197   74323 retry.go:31] will retry after 3.028501877s: waiting for machine to come up
	I1006 01:01:12.318269   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:12.318658   74302 main.go:141] libmachine: (addons-565340) DBG | unable to find current IP address of domain addons-565340 in network mk-addons-565340
	I1006 01:01:12.318714   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:01:12.318631   74323 retry.go:31] will retry after 3.929624073s: waiting for machine to come up
	I1006 01:01:16.250888   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:16.251333   74302 main.go:141] libmachine: (addons-565340) DBG | unable to find current IP address of domain addons-565340 in network mk-addons-565340
	I1006 01:01:16.251363   74302 main.go:141] libmachine: (addons-565340) DBG | I1006 01:01:16.251283   74323 retry.go:31] will retry after 3.454878621s: waiting for machine to come up
	I1006 01:01:19.707331   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:19.707752   74302 main.go:141] libmachine: (addons-565340) Found IP for machine: 192.168.39.147
	I1006 01:01:19.707777   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has current primary IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:19.707786   74302 main.go:141] libmachine: (addons-565340) Reserving static IP address...
	I1006 01:01:19.708327   74302 main.go:141] libmachine: (addons-565340) DBG | unable to find host DHCP lease matching {name: "addons-565340", mac: "52:54:00:e8:8f:59", ip: "192.168.39.147"} in network mk-addons-565340
	I1006 01:01:19.778492   74302 main.go:141] libmachine: (addons-565340) Reserved static IP address: 192.168.39.147
	I1006 01:01:19.778528   74302 main.go:141] libmachine: (addons-565340) Waiting for SSH to be available...
	I1006 01:01:19.778540   74302 main.go:141] libmachine: (addons-565340) DBG | Getting to WaitForSSH function...
	I1006 01:01:19.781391   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:19.781912   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:19.781952   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:19.782100   74302 main.go:141] libmachine: (addons-565340) DBG | Using SSH client type: external
	I1006 01:01:19.782119   74302 main.go:141] libmachine: (addons-565340) DBG | Using SSH private key: /home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa (-rw-------)
	I1006 01:01:19.782143   74302 main.go:141] libmachine: (addons-565340) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1006 01:01:19.782159   74302 main.go:141] libmachine: (addons-565340) DBG | About to run SSH command:
	I1006 01:01:19.782173   74302 main.go:141] libmachine: (addons-565340) DBG | exit 0
	I1006 01:01:19.873926   74302 main.go:141] libmachine: (addons-565340) DBG | SSH cmd err, output: <nil>: 
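The external SSH probe above runs `exit 0` against the new machine with a fixed set of hardened options. A sketch of assembling that argument list (the `sshArgs` helper is illustrative; the options themselves are the ones visible in the logged command line):

```go
package main

import "fmt"

// sshArgs builds the argument vector for an external ssh liveness probe,
// matching the options logged above: no host-key persistence, short connect
// timeout, and key-only auth against the machine's generated id_rsa.
func sshArgs(user, host, keyPath string, port int) []string {
	return []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", fmt.Sprintf("%d", port),
		fmt.Sprintf("%s@%s", user, host),
	}
}
```

The resulting slice would be passed to `exec.Command("ssh", args...)` together with the probe command (`exit 0`); a zero exit status means the guest's sshd is up.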
	I1006 01:01:19.874141   74302 main.go:141] libmachine: (addons-565340) KVM machine creation complete!
	I1006 01:01:19.874447   74302 main.go:141] libmachine: (addons-565340) Calling .GetConfigRaw
	I1006 01:01:19.875038   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:01:19.875228   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:01:19.875370   74302 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1006 01:01:19.875396   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:01:19.876674   74302 main.go:141] libmachine: Detecting operating system of created instance...
	I1006 01:01:19.876693   74302 main.go:141] libmachine: Waiting for SSH to be available...
	I1006 01:01:19.876702   74302 main.go:141] libmachine: Getting to WaitForSSH function...
	I1006 01:01:19.876712   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:01:19.878978   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:19.879301   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:19.879332   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:19.879443   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:01:19.879607   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:01:19.879753   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:01:19.879919   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:01:19.880101   74302 main.go:141] libmachine: Using SSH client type: native
	I1006 01:01:19.880429   74302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1006 01:01:19.880441   74302 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1006 01:01:19.997429   74302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 01:01:19.997456   74302 main.go:141] libmachine: Detecting the provisioner...
	I1006 01:01:19.997464   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:01:20.000412   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.000776   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:20.000808   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.001002   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:01:20.001207   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:01:20.001388   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:01:20.001529   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:01:20.001698   74302 main.go:141] libmachine: Using SSH client type: native
	I1006 01:01:20.001997   74302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1006 01:01:20.002008   74302 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1006 01:01:20.118860   74302 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1006 01:01:20.118983   74302 main.go:141] libmachine: found compatible host: buildroot
	I1006 01:01:20.118995   74302 main.go:141] libmachine: Provisioning with buildroot...
	I1006 01:01:20.119007   74302 main.go:141] libmachine: (addons-565340) Calling .GetMachineName
	I1006 01:01:20.119297   74302 buildroot.go:166] provisioning hostname "addons-565340"
	I1006 01:01:20.119319   74302 main.go:141] libmachine: (addons-565340) Calling .GetMachineName
	I1006 01:01:20.119516   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:01:20.122137   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.122514   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:20.122547   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.122698   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:01:20.122875   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:01:20.123017   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:01:20.123196   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:01:20.123371   74302 main.go:141] libmachine: Using SSH client type: native
	I1006 01:01:20.123686   74302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1006 01:01:20.123703   74302 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-565340 && echo "addons-565340" | sudo tee /etc/hostname
	I1006 01:01:20.250173   74302 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-565340
	
	I1006 01:01:20.250207   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:01:20.252896   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.253209   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:20.253240   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.253432   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:01:20.253591   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:01:20.253785   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:01:20.253943   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:01:20.254082   74302 main.go:141] libmachine: Using SSH client type: native
	I1006 01:01:20.254437   74302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1006 01:01:20.254462   74302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-565340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-565340/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-565340' | sudo tee -a /etc/hosts; 
				fi
			fi
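The SSH command above rewrites the guest's /etc/hosts so the machine name resolves locally: if no line ends in the hostname, it either rewrites an existing `127.0.1.1` entry or appends one. A sandbox-friendly sketch of the same grep/sed logic, run against a temporary file instead of /etc/hosts and without sudo/tee (the `addons-565340` name and the patterns are from the log; the temp-file indirection and sample contents are mine):

```shell
#!/bin/sh
# Scratch stand-in for the guest's /etc/hosts.
HOSTS=$(mktemp)
NAME=addons-565340
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Same branching as the logged SSH command, minus sudo/tee:
if ! grep -q "\s$NAME\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
        sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
    else
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
RESULT=$(cat "$HOSTS")
echo "$RESULT"
```

Because a `127.0.1.1` line already exists in the sample file, the sed branch fires and replaces it in place rather than appending a duplicate.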
	I1006 01:01:20.377971   74302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 01:01:20.378001   74302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17314-66550/.minikube CaCertPath:/home/jenkins/minikube-integration/17314-66550/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17314-66550/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17314-66550/.minikube}
	I1006 01:01:20.378051   74302 buildroot.go:174] setting up certificates
	I1006 01:01:20.378061   74302 provision.go:83] configureAuth start
	I1006 01:01:20.378075   74302 main.go:141] libmachine: (addons-565340) Calling .GetMachineName
	I1006 01:01:20.378359   74302 main.go:141] libmachine: (addons-565340) Calling .GetIP
	I1006 01:01:20.380956   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.381299   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:20.381332   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.381517   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:01:20.383537   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.383866   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:20.383884   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.383997   74302 provision.go:138] copyHostCerts
	I1006 01:01:20.384090   74302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-66550/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17314-66550/.minikube/ca.pem (1078 bytes)
	I1006 01:01:20.384193   74302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-66550/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17314-66550/.minikube/cert.pem (1123 bytes)
	I1006 01:01:20.384256   74302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17314-66550/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17314-66550/.minikube/key.pem (1675 bytes)
	I1006 01:01:20.384298   74302 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17314-66550/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17314-66550/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17314-66550/.minikube/certs/ca-key.pem org=jenkins.addons-565340 san=[192.168.39.147 192.168.39.147 localhost 127.0.0.1 minikube addons-565340]
	I1006 01:01:20.547818   74302 provision.go:172] copyRemoteCerts
	I1006 01:01:20.547883   74302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 01:01:20.547910   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:01:20.550562   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.550911   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:20.550948   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.551096   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:01:20.551308   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:01:20.551473   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:01:20.551628   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:01:20.639361   74302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-66550/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1006 01:01:20.660287   74302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-66550/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1006 01:01:20.680903   74302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-66550/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1006 01:01:20.701190   74302 provision.go:86] duration metric: configureAuth took 323.114146ms
	I1006 01:01:20.701210   74302 buildroot.go:189] setting minikube options for container-runtime
	I1006 01:01:20.701373   74302 config.go:182] Loaded profile config "addons-565340": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1006 01:01:20.701396   74302 main.go:141] libmachine: Checking connection to Docker...
	I1006 01:01:20.701406   74302 main.go:141] libmachine: (addons-565340) Calling .GetURL
	I1006 01:01:20.702488   74302 main.go:141] libmachine: (addons-565340) DBG | Using libvirt version 6000000
	I1006 01:01:20.704274   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.704576   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:20.704606   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.704749   74302 main.go:141] libmachine: Docker is up and running!
	I1006 01:01:20.704765   74302 main.go:141] libmachine: Reticulating splines...
	I1006 01:01:20.704773   74302 client.go:171] LocalClient.Create took 23.703455752s
	I1006 01:01:20.704797   74302 start.go:167] duration metric: libmachine.API.Create for "addons-565340" took 23.703546682s
	I1006 01:01:20.704807   74302 start.go:300] post-start starting for "addons-565340" (driver="kvm2")
	I1006 01:01:20.704817   74302 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 01:01:20.704839   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:01:20.705038   74302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 01:01:20.705057   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:01:20.706881   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.707133   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:20.707156   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.707326   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:01:20.707514   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:01:20.707682   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:01:20.707833   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:01:20.797125   74302 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 01:01:20.801212   74302 info.go:137] Remote host: Buildroot 2021.02.12
	I1006 01:01:20.801268   74302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-66550/.minikube/addons for local assets ...
	I1006 01:01:20.801342   74302 filesync.go:126] Scanning /home/jenkins/minikube-integration/17314-66550/.minikube/files for local assets ...
	I1006 01:01:20.801371   74302 start.go:303] post-start completed in 96.555835ms
	I1006 01:01:20.801403   74302 main.go:141] libmachine: (addons-565340) Calling .GetConfigRaw
	I1006 01:01:20.802000   74302 main.go:141] libmachine: (addons-565340) Calling .GetIP
	I1006 01:01:20.804288   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.804623   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:20.804653   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.804850   74302 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/config.json ...
	I1006 01:01:20.805023   74302 start.go:128] duration metric: createHost completed in 23.821679621s
	I1006 01:01:20.805047   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:01:20.807251   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.807550   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:20.807575   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.807686   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:01:20.807849   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:01:20.807975   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:01:20.808128   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:01:20.808255   74302 main.go:141] libmachine: Using SSH client type: native
	I1006 01:01:20.808578   74302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I1006 01:01:20.808590   74302 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1006 01:01:20.926648   74302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696554080.905027561
	
	I1006 01:01:20.926672   74302 fix.go:206] guest clock: 1696554080.905027561
	I1006 01:01:20.926679   74302 fix.go:219] Guest: 2023-10-06 01:01:20.905027561 +0000 UTC Remote: 2023-10-06 01:01:20.805036037 +0000 UTC m=+23.939897968 (delta=99.991524ms)
	I1006 01:01:20.926715   74302 fix.go:190] guest clock delta is within tolerance: 99.991524ms
	I1006 01:01:20.926723   74302 start.go:83] releasing machines lock for "addons-565340", held for 23.943469668s
	I1006 01:01:20.926747   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:01:20.927003   74302 main.go:141] libmachine: (addons-565340) Calling .GetIP
	I1006 01:01:20.929395   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.929762   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:20.929793   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.929960   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:01:20.930384   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:01:20.930563   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:01:20.930658   74302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 01:01:20.930715   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:01:20.930797   74302 ssh_runner.go:195] Run: cat /version.json
	I1006 01:01:20.930829   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:01:20.933247   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.933548   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.933610   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:20.933636   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.933723   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:01:20.933852   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:20.933881   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:20.933890   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:01:20.934056   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:01:20.934062   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:01:20.934219   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:01:20.934214   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:01:20.934390   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:01:20.934502   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:01:21.019057   74302 ssh_runner.go:195] Run: systemctl --version
	I1006 01:01:21.040105   74302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 01:01:21.045172   74302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 01:01:21.045238   74302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 01:01:21.059197   74302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1006 01:01:21.059213   74302 start.go:472] detecting cgroup driver to use...
	I1006 01:01:21.059273   74302 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1006 01:01:21.088846   74302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1006 01:01:21.101029   74302 docker.go:198] disabling cri-docker service (if available) ...
	I1006 01:01:21.101087   74302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1006 01:01:21.113593   74302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1006 01:01:21.126318   74302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1006 01:01:21.237871   74302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1006 01:01:21.355170   74302 docker.go:214] disabling docker service ...
	I1006 01:01:21.355237   74302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1006 01:01:21.369064   74302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1006 01:01:21.381354   74302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1006 01:01:21.496164   74302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1006 01:01:21.610779   74302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 01:01:21.623452   74302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 01:01:21.640328   74302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1006 01:01:21.650134   74302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1006 01:01:21.659778   74302 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1006 01:01:21.659828   74302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1006 01:01:21.669892   74302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 01:01:21.679733   74302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1006 01:01:21.689368   74302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 01:01:21.699140   74302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 01:01:21.708960   74302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
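The run of `sed` edits above rewrites `/etc/containerd/config.toml` in place: pinning the sandbox image to `pause:3.9`, forcing `SystemdCgroup = false` to select the cgroupfs driver, and migrating runtime names to `io.containerd.runc.v2`. A sketch of the cgroup-driver edit against a scratch copy of the config (the sed expression is the one from the log, sans sudo; the sample TOML content is mine):

```shell
#!/bin/sh
# Scratch stand-in for /etc/containerd/config.toml.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same expression minikube runs; \1 preserves the original indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CONF"
cat "$CONF"
```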
	I1006 01:01:21.718565   74302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 01:01:21.727328   74302 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1006 01:01:21.727381   74302 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1006 01:01:21.739551   74302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 01:01:21.748330   74302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 01:01:21.861325   74302 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1006 01:01:21.891884   74302 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1006 01:01:21.891969   74302 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1006 01:01:21.896984   74302 retry.go:31] will retry after 678.974893ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
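After restarting containerd, the log above shows minikube polling `stat` on the socket path and retrying (via retry.go) until the daemon recreates it, within a 60s budget. A minimal sketch of the same poll-until-exists loop, using a delayed background `touch` to stand in for containerd creating its socket (the file name, attempt count, and timings are mine):

```shell
#!/bin/sh
# Stand-in for /run/containerd/containerd.sock, created after a short delay.
SOCK=$(mktemp -u)
( sleep 1; touch "$SOCK" ) &

# Bounded poll: up to 50 attempts, 100ms apart, stat'ing the path each time.
FOUND=0
for i in $(seq 1 50); do
    if stat "$SOCK" >/dev/null 2>&1; then FOUND=1; break; fi
    sleep 0.1
done
wait
echo "socket present: $FOUND"
```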
	I1006 01:01:22.576359   74302 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1006 01:01:22.581445   74302 start.go:540] Will wait 60s for crictl version
	I1006 01:01:22.581525   74302 ssh_runner.go:195] Run: which crictl
	I1006 01:01:22.585014   74302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1006 01:01:22.618877   74302 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.6
	RuntimeApiVersion:  v1
	I1006 01:01:22.618945   74302 ssh_runner.go:195] Run: containerd --version
	I1006 01:01:22.648761   74302 ssh_runner.go:195] Run: containerd --version
	I1006 01:01:22.678711   74302 out.go:177] * Preparing Kubernetes v1.28.2 on containerd 1.7.6 ...
	I1006 01:01:22.680117   74302 main.go:141] libmachine: (addons-565340) Calling .GetIP
	I1006 01:01:22.683007   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:22.683319   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:01:22.683344   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:01:22.683580   74302 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1006 01:01:22.687605   74302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 01:01:22.699821   74302 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1006 01:01:22.699892   74302 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 01:01:22.732036   74302 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1006 01:01:22.732100   74302 ssh_runner.go:195] Run: which lz4
	I1006 01:01:22.736005   74302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1006 01:01:22.739963   74302 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1006 01:01:22.739995   74302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-66550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (456662433 bytes)
	I1006 01:01:24.485902   74302 containerd.go:547] Took 1.749929 seconds to copy over tarball
	I1006 01:01:24.485982   74302 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1006 01:01:27.375260   74302 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.889248116s)
	I1006 01:01:27.375290   74302 containerd.go:554] Took 2.889367 seconds to extract the tarball
	I1006 01:01:27.375300   74302 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1006 01:01:27.417472   74302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 01:01:27.524500   74302 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1006 01:01:27.546039   74302 ssh_runner.go:195] Run: sudo crictl images --output json
	I1006 01:01:27.596649   74302 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.2 registry.k8s.io/kube-controller-manager:v1.28.2 registry.k8s.io/kube-scheduler:v1.28.2 registry.k8s.io/kube-proxy:v1.28.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1006 01:01:27.596780   74302 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.2
	I1006 01:01:27.596792   74302 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.2
	I1006 01:01:27.596849   74302 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1006 01:01:27.596858   74302 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1006 01:01:27.596839   74302 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.2
	I1006 01:01:27.596789   74302 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 01:01:27.597093   74302 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1006 01:01:27.596793   74302 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1006 01:01:27.598344   74302 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1006 01:01:27.598356   74302 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.2
	I1006 01:01:27.598389   74302 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1006 01:01:27.598430   74302 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1006 01:01:27.598451   74302 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1006 01:01:27.598512   74302 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.2
	I1006 01:01:27.598353   74302 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 01:01:27.598698   74302 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.2
	I1006 01:01:27.806458   74302 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.28.2"
	I1006 01:01:27.853221   74302 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.9"
	I1006 01:01:28.002994   74302 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.9-0"
	I1006 01:01:28.005893   74302 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.28.2"
	I1006 01:01:28.006059   74302 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.10.1"
	I1006 01:01:28.006458   74302 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.28.2"
	I1006 01:01:28.009456   74302 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.28.2"
	I1006 01:01:28.204833   74302 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.2" needs transfer: "registry.k8s.io/kube-proxy:v1.28.2" does not exist at hash "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0" in container runtime
	I1006 01:01:28.204878   74302 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.2
	I1006 01:01:28.204931   74302 ssh_runner.go:195] Run: which crictl
	I1006 01:01:28.428380   74302 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I1006 01:01:28.428440   74302 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I1006 01:01:28.428508   74302 ssh_runner.go:195] Run: which crictl
	I1006 01:01:29.026457   74302 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.9-0": (1.023410637s)
	I1006 01:01:29.026498   74302 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1006 01:01:29.026533   74302 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1006 01:01:29.026586   74302 ssh_runner.go:195] Run: which crictl
	I1006 01:01:29.196515   74302 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.28.2": (1.190589742s)
	I1006 01:01:29.196559   74302 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.2" does not exist at hash "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce" in container runtime
	I1006 01:01:29.196595   74302 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.2
	I1006 01:01:29.196644   74302 ssh_runner.go:195] Run: which crictl
	I1006 01:01:29.217236   74302 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.10.1": (1.211143871s)
	I1006 01:01:29.217278   74302 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1006 01:01:29.217315   74302 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1006 01:01:29.217353   74302 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.28.2": (1.210862177s)
	I1006 01:01:29.217391   74302 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.2" does not exist at hash "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8" in container runtime
	I1006 01:01:29.217360   74302 ssh_runner.go:195] Run: which crictl
	I1006 01:01:29.217422   74302 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.2
	I1006 01:01:29.217468   74302 ssh_runner.go:195] Run: which crictl
	I1006 01:01:29.223856   74302 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.28.2": (1.214371922s)
	I1006 01:01:29.223902   74302 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.2" does not exist at hash "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57" in container runtime
	I1006 01:01:29.223913   74302 ssh_runner.go:235] Completed: which crictl: (1.018954429s)
	I1006 01:01:29.223944   74302 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.2
	I1006 01:01:29.223980   74302 ssh_runner.go:195] Run: which crictl
	I1006 01:01:29.223982   74302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.2
	I1006 01:01:29.224023   74302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I1006 01:01:29.224056   74302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1006 01:01:29.224128   74302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.2
	I1006 01:01:29.228138   74302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.2
	I1006 01:01:29.228176   74302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1006 01:01:29.231831   74302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.2
	I1006 01:01:29.544328   74302 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1006 01:01:29.653186   74302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-66550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2
	I1006 01:01:29.653227   74302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-66550/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I1006 01:01:29.653268   74302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-66550/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.2
	I1006 01:01:29.653325   74302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-66550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1006 01:01:29.653364   74302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-66550/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.2
	I1006 01:01:29.653403   74302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-66550/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1006 01:01:29.653453   74302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-66550/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.2
	I1006 01:01:29.653527   74302 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1006 01:01:29.653566   74302 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 01:01:29.653610   74302 ssh_runner.go:195] Run: which crictl
	I1006 01:01:29.657417   74302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 01:01:29.705951   74302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17314-66550/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1006 01:01:29.706029   74302 cache_images.go:92] LoadImages completed in 2.109353424s
	W1006 01:01:29.706130   74302 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17314-66550/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2: no such file or directory
	I1006 01:01:29.706187   74302 ssh_runner.go:195] Run: sudo crictl info
	I1006 01:01:29.741378   74302 cni.go:84] Creating CNI manager for ""
	I1006 01:01:29.741409   74302 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1006 01:01:29.741431   74302 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1006 01:01:29.741448   74302 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-565340 NodeName:addons-565340 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 01:01:29.741571   74302 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-565340"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 01:01:29.741649   74302 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-565340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-565340 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1006 01:01:29.741708   74302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1006 01:01:29.751246   74302 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 01:01:29.751322   74302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 01:01:29.760205   74302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1006 01:01:29.775345   74302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 01:01:29.790402   74302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I1006 01:01:29.805793   74302 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I1006 01:01:29.809569   74302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 01:01:29.820702   74302 certs.go:56] Setting up /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340 for IP: 192.168.39.147
	I1006 01:01:29.820770   74302 certs.go:190] acquiring lock for shared ca certs: {Name:mk4c46b02b7dd4c73cceae9441293735524deb7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:01:29.820904   74302 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17314-66550/.minikube/ca.key
	I1006 01:01:29.942552   74302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-66550/.minikube/ca.crt ...
	I1006 01:01:29.942582   74302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-66550/.minikube/ca.crt: {Name:mk176f1b1503aab8e5f9fce9961380e23138624c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:01:29.942755   74302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-66550/.minikube/ca.key ...
	I1006 01:01:29.942766   74302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-66550/.minikube/ca.key: {Name:mkaf58fa9e6983d7aead951b6a085f1b80ae6a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:01:29.942839   74302 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17314-66550/.minikube/proxy-client-ca.key
	I1006 01:01:30.176840   74302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-66550/.minikube/proxy-client-ca.crt ...
	I1006 01:01:30.176873   74302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-66550/.minikube/proxy-client-ca.crt: {Name:mkc1e1f811ce6b78fbb14879beca30218ffbb9c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:01:30.177039   74302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-66550/.minikube/proxy-client-ca.key ...
	I1006 01:01:30.177050   74302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-66550/.minikube/proxy-client-ca.key: {Name:mkf8b70aef4496ecff5676a3e282ee795b58b401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:01:30.177149   74302 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.key
	I1006 01:01:30.177163   74302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt with IP's: []
	I1006 01:01:30.256394   74302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt ...
	I1006 01:01:30.256425   74302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: {Name:mk069ad0aefcaab552b4143d771ecafe9857c97c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:01:30.256578   74302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.key ...
	I1006 01:01:30.256588   74302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.key: {Name:mkac7d46722b67a40d606fa3d43ab89059d7a33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:01:30.256658   74302 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/apiserver.key.a88b087d
	I1006 01:01:30.256675   74302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/apiserver.crt.a88b087d with IP's: [192.168.39.147 10.96.0.1 127.0.0.1 10.0.0.1]
	I1006 01:01:30.505770   74302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/apiserver.crt.a88b087d ...
	I1006 01:01:30.505801   74302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/apiserver.crt.a88b087d: {Name:mkc792c934225e68e2d9bbccdd8324e180924c56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:01:30.505959   74302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/apiserver.key.a88b087d ...
	I1006 01:01:30.505971   74302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/apiserver.key.a88b087d: {Name:mk4e53a031d4c383273c10730ecaeb3c612bfa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:01:30.506033   74302 certs.go:337] copying /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/apiserver.crt.a88b087d -> /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/apiserver.crt
	I1006 01:01:30.506103   74302 certs.go:341] copying /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/apiserver.key.a88b087d -> /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/apiserver.key
	I1006 01:01:30.506152   74302 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/proxy-client.key
	I1006 01:01:30.506168   74302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/proxy-client.crt with IP's: []
	I1006 01:01:30.772417   74302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/proxy-client.crt ...
	I1006 01:01:30.772448   74302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/proxy-client.crt: {Name:mk4e2592d732d5a5da42d79841a12feb7526b9bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:01:30.772595   74302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/proxy-client.key ...
	I1006 01:01:30.772607   74302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/proxy-client.key: {Name:mkf16f61d8da3dc337d345e4c5e617c46df46042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:01:30.772760   74302 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-66550/.minikube/certs/home/jenkins/minikube-integration/17314-66550/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 01:01:30.772798   74302 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-66550/.minikube/certs/home/jenkins/minikube-integration/17314-66550/.minikube/certs/ca.pem (1078 bytes)
	I1006 01:01:30.772822   74302 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-66550/.minikube/certs/home/jenkins/minikube-integration/17314-66550/.minikube/certs/cert.pem (1123 bytes)
	I1006 01:01:30.772847   74302 certs.go:437] found cert: /home/jenkins/minikube-integration/17314-66550/.minikube/certs/home/jenkins/minikube-integration/17314-66550/.minikube/certs/key.pem (1675 bytes)
	I1006 01:01:30.773457   74302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1006 01:01:30.796997   74302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1006 01:01:30.818945   74302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 01:01:30.840617   74302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1006 01:01:30.862338   74302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-66550/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 01:01:30.884182   74302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-66550/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1006 01:01:30.905535   74302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-66550/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 01:01:30.926979   74302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-66550/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 01:01:30.949546   74302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17314-66550/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 01:01:30.971363   74302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 01:01:30.987029   74302 ssh_runner.go:195] Run: openssl version
	I1006 01:01:30.992462   74302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 01:01:31.003910   74302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 01:01:31.008757   74302 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  6 01:01 /usr/share/ca-certificates/minikubeCA.pem
	I1006 01:01:31.008830   74302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 01:01:31.014347   74302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 01:01:31.024977   74302 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1006 01:01:31.029491   74302 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1006 01:01:31.029558   74302 kubeadm.go:404] StartCluster: {Name:addons-565340 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-565340 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 01:01:31.029627   74302 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1006 01:01:31.029658   74302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1006 01:01:31.067172   74302 cri.go:89] found id: ""
	I1006 01:01:31.067229   74302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 01:01:31.076852   74302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 01:01:31.085410   74302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 01:01:31.094801   74302 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 01:01:31.094851   74302 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1006 01:01:31.291593   74302 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 01:01:56.785969   74302 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1006 01:01:56.786033   74302 kubeadm.go:322] [preflight] Running pre-flight checks
	I1006 01:01:56.786122   74302 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 01:01:56.786267   74302 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 01:01:56.786419   74302 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1006 01:01:56.786503   74302 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 01:01:56.788047   74302 out.go:204]   - Generating certificates and keys ...
	I1006 01:01:56.788147   74302 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1006 01:01:56.788233   74302 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1006 01:01:56.788334   74302 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 01:01:56.788419   74302 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1006 01:01:56.788516   74302 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1006 01:01:56.788596   74302 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1006 01:01:56.788666   74302 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1006 01:01:56.788834   74302 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-565340 localhost] and IPs [192.168.39.147 127.0.0.1 ::1]
	I1006 01:01:56.788897   74302 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1006 01:01:56.789043   74302 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-565340 localhost] and IPs [192.168.39.147 127.0.0.1 ::1]
	I1006 01:01:56.789127   74302 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 01:01:56.789215   74302 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 01:01:56.789274   74302 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1006 01:01:56.789367   74302 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 01:01:56.789448   74302 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 01:01:56.789526   74302 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 01:01:56.789619   74302 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 01:01:56.789698   74302 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 01:01:56.789813   74302 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 01:01:56.789908   74302 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 01:01:56.791395   74302 out.go:204]   - Booting up control plane ...
	I1006 01:01:56.791503   74302 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 01:01:56.791600   74302 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 01:01:56.791693   74302 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 01:01:56.791831   74302 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 01:01:56.791950   74302 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 01:01:56.791989   74302 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1006 01:01:56.792137   74302 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1006 01:01:56.792234   74302 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503095 seconds
	I1006 01:01:56.792357   74302 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 01:01:56.792521   74302 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 01:01:56.792601   74302 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 01:01:56.792789   74302 kubeadm.go:322] [mark-control-plane] Marking the node addons-565340 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 01:01:56.792861   74302 kubeadm.go:322] [bootstrap-token] Using token: q87ttd.0o85ujrm3i69ge20
	I1006 01:01:56.794160   74302 out.go:204]   - Configuring RBAC rules ...
	I1006 01:01:56.794259   74302 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 01:01:56.794365   74302 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 01:01:56.794487   74302 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 01:01:56.794592   74302 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 01:01:56.794741   74302 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 01:01:56.794855   74302 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 01:01:56.795015   74302 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 01:01:56.795069   74302 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1006 01:01:56.795132   74302 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1006 01:01:56.795140   74302 kubeadm.go:322] 
	I1006 01:01:56.795216   74302 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1006 01:01:56.795225   74302 kubeadm.go:322] 
	I1006 01:01:56.795320   74302 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1006 01:01:56.795330   74302 kubeadm.go:322] 
	I1006 01:01:56.795357   74302 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1006 01:01:56.795422   74302 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 01:01:56.795505   74302 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 01:01:56.795527   74302 kubeadm.go:322] 
	I1006 01:01:56.795601   74302 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1006 01:01:56.795611   74302 kubeadm.go:322] 
	I1006 01:01:56.795678   74302 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 01:01:56.795687   74302 kubeadm.go:322] 
	I1006 01:01:56.795763   74302 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1006 01:01:56.795867   74302 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 01:01:56.795950   74302 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 01:01:56.795960   74302 kubeadm.go:322] 
	I1006 01:01:56.796080   74302 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 01:01:56.796175   74302 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1006 01:01:56.796186   74302 kubeadm.go:322] 
	I1006 01:01:56.796298   74302 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token q87ttd.0o85ujrm3i69ge20 \
	I1006 01:01:56.796418   74302 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:49e6646e52219221a001c45d4e81369588c78883bbc7c69c6b50c5c846c5a786 \
	I1006 01:01:56.796446   74302 kubeadm.go:322] 	--control-plane 
	I1006 01:01:56.796451   74302 kubeadm.go:322] 
	I1006 01:01:56.796559   74302 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1006 01:01:56.796565   74302 kubeadm.go:322] 
	I1006 01:01:56.796650   74302 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token q87ttd.0o85ujrm3i69ge20 \
	I1006 01:01:56.796782   74302 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:49e6646e52219221a001c45d4e81369588c78883bbc7c69c6b50c5c846c5a786 
	I1006 01:01:56.796794   74302 cni.go:84] Creating CNI manager for ""
	I1006 01:01:56.796803   74302 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1006 01:01:56.798363   74302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1006 01:01:56.799823   74302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1006 01:01:56.809875   74302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1006 01:01:56.840337   74302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 01:01:56.840425   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=84890cb24d0240d9d992d7c7712ee519ceed4154 minikube.k8s.io/name=addons-565340 minikube.k8s.io/updated_at=2023_10_06T01_01_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:01:56.840426   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:01:57.060956   74302 ops.go:34] apiserver oom_adj: -16
	I1006 01:01:57.061211   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:01:57.185807   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:01:57.787607   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:01:58.287648   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:01:58.787542   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:01:59.287254   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:01:59.787249   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:00.287547   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:00.787340   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:01.287203   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:01.787180   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:02.287686   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:02.787523   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:03.286960   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:03.787575   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:04.287216   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:04.787687   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:05.287727   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:05.787563   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:06.287342   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:06.787760   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:07.287123   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:07.787111   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:08.287155   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:08.787312   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:09.287734   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:09.787754   74302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 01:02:09.906678   74302 kubeadm.go:1081] duration metric: took 13.066330206s to wait for elevateKubeSystemPrivileges.
	I1006 01:02:09.906707   74302 kubeadm.go:406] StartCluster complete in 38.877154556s
	I1006 01:02:09.906728   74302 settings.go:142] acquiring lock: {Name:mk2ef69c2deafd6aa21bc108f71dd631e3500d3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:02:09.906883   74302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17314-66550/kubeconfig
	I1006 01:02:09.907343   74302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-66550/kubeconfig: {Name:mk6d7100d1d1f6341ad9b0ac1a52a8397218e9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:02:09.907532   74302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 01:02:09.907658   74302 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1006 01:02:09.907781   74302 addons.go:69] Setting volumesnapshots=true in profile "addons-565340"
	I1006 01:02:09.907825   74302 addons.go:231] Setting addon volumesnapshots=true in "addons-565340"
	I1006 01:02:09.907856   74302 config.go:182] Loaded profile config "addons-565340": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1006 01:02:09.907899   74302 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:02:09.907914   74302 addons.go:69] Setting ingress-dns=true in profile "addons-565340"
	I1006 01:02:09.907931   74302 addons.go:231] Setting addon ingress-dns=true in "addons-565340"
	I1006 01:02:09.907991   74302 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:02:09.907998   74302 addons.go:69] Setting cloud-spanner=true in profile "addons-565340"
	I1006 01:02:09.908017   74302 addons.go:231] Setting addon cloud-spanner=true in "addons-565340"
	I1006 01:02:09.908013   74302 addons.go:69] Setting helm-tiller=true in profile "addons-565340"
	I1006 01:02:09.908052   74302 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:02:09.908055   74302 addons.go:231] Setting addon helm-tiller=true in "addons-565340"
	I1006 01:02:09.908048   74302 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-565340"
	I1006 01:02:09.908049   74302 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-565340"
	I1006 01:02:09.908074   74302 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-565340"
	I1006 01:02:09.908102   74302 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:02:09.908115   74302 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-565340"
	I1006 01:02:09.908129   74302 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:02:09.908134   74302 addons.go:69] Setting gcp-auth=true in profile "addons-565340"
	I1006 01:02:09.908157   74302 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:02:09.908167   74302 mustload.go:65] Loading cluster: addons-565340
	I1006 01:02:09.908372   74302 config.go:182] Loaded profile config "addons-565340": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1006 01:02:09.908434   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.908441   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.908461   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.908472   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.908483   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.908501   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.908531   74302 addons.go:69] Setting inspektor-gadget=true in profile "addons-565340"
	I1006 01:02:09.908530   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.908543   74302 addons.go:231] Setting addon inspektor-gadget=true in "addons-565340"
	I1006 01:02:09.908555   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.908556   74302 addons.go:69] Setting ingress=true in profile "addons-565340"
	I1006 01:02:09.908570   74302 addons.go:231] Setting addon ingress=true in "addons-565340"
	I1006 01:02:09.908604   74302 addons.go:69] Setting metrics-server=true in profile "addons-565340"
	I1006 01:02:09.908616   74302 addons.go:231] Setting addon metrics-server=true in "addons-565340"
	I1006 01:02:09.908652   74302 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:02:09.908667   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.908707   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.908710   74302 addons.go:69] Setting storage-provisioner=true in profile "addons-565340"
	I1006 01:02:09.908724   74302 addons.go:231] Setting addon storage-provisioner=true in "addons-565340"
	I1006 01:02:09.908760   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.908777   74302 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:02:09.908783   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.908820   74302 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-565340"
	I1006 01:02:09.908906   74302 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:02:09.908915   74302 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-565340"
	I1006 01:02:09.907991   74302 addons.go:69] Setting default-storageclass=true in profile "addons-565340"
	I1006 01:02:09.908975   74302 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-565340"
	I1006 01:02:09.908990   74302 addons.go:69] Setting registry=true in profile "addons-565340"
	I1006 01:02:09.909001   74302 addons.go:231] Setting addon registry=true in "addons-565340"
	I1006 01:02:09.909248   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.909273   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.909357   74302 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:02:09.909509   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.909528   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.909545   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.909664   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.909681   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.909691   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.909698   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.909672   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.909804   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.909826   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.909894   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.909916   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.911206   74302 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:02:09.911579   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.911608   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.926474   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I1006 01:02:09.926633   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43441
	I1006 01:02:09.926842   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41211
	I1006 01:02:09.926929   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32947
	I1006 01:02:09.927124   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.927248   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.927310   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.927968   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.928161   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.928183   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.928330   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.928342   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.928354   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.928362   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.928783   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.928801   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.928858   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.928961   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.929372   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.929389   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.929856   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.930001   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.930461   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.930493   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.930971   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.931011   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.931532   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.931556   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.949189   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32795
	I1006 01:02:09.949690   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.950519   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.950607   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.951056   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.951295   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:09.953582   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:02:09.955488   74302 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1006 01:02:09.956932   74302 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1006 01:02:09.958368   74302 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1006 01:02:09.959693   74302 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1006 01:02:09.961068   74302 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1006 01:02:09.962285   74302 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1006 01:02:09.963624   74302 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1006 01:02:09.964881   74302 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1006 01:02:09.966158   74302 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1006 01:02:09.966181   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1006 01:02:09.966208   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:02:09.964169   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I1006 01:02:09.967331   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33197
	I1006 01:02:09.967622   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.968157   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.968183   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.968203   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41195
	I1006 01:02:09.968561   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.968702   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:09.969598   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.970402   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.970424   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.970489   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:09.970523   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.971157   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.971345   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:09.971360   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:02:09.971380   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:09.971849   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:02:09.971924   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:02:09.972079   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:02:09.973564   74302 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.1
	I1006 01:02:09.974626   74302 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-565340"
	I1006 01:02:09.975037   74302 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:02:09.972400   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45801
	I1006 01:02:09.974677   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41667
	I1006 01:02:09.972257   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:02:09.974993   74302 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 01:02:09.975280   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1006 01:02:09.975302   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:02:09.975487   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.975528   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.978482   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41373
	I1006 01:02:09.978500   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33779
	I1006 01:02:09.978512   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I1006 01:02:09.978633   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:02:09.978683   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I1006 01:02:09.978794   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:09.978949   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.979042   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.979106   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.979165   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.979218   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.979230   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:02:09.979248   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:09.979547   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:02:09.979718   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.979731   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.979840   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.979851   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.979975   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.980001   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.980185   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.980197   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.980247   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.980291   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.980444   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.980462   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.980521   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:02:09.980896   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.981217   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:09.981217   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:02:09.981250   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.981296   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.981315   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.981821   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:02:09.981954   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39503
	I1006 01:02:09.982054   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.982076   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.982534   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.982616   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40533
	I1006 01:02:09.982735   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.982888   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:09.982961   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.982984   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.983044   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.983120   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.983237   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.983250   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.983388   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.983573   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.983978   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.984018   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.984283   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.984298   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.984633   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:02:09.984661   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.984690   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.985052   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.985143   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40827
	I1006 01:02:09.986800   74302 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1006 01:02:09.985345   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.985392   74302 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:02:09.985711   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:09.985753   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.985953   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:09.988603   74302 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 01:02:09.988616   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1006 01:02:09.988636   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:02:09.988689   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.989026   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.989089   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.989121   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.989128   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.990008   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:09.990030   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:09.990729   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:09.990729   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.990806   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.991143   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:09.991894   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:02:09.993413   74302 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1006 01:02:09.995015   74302 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1006 01:02:09.993375   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:09.995038   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1006 01:02:09.994109   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:02:09.995081   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:02:09.994148   74302 addons.go:231] Setting addon default-storageclass=true in "addons-565340"
	I1006 01:02:09.995108   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:02:09.995138   74302 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:02:09.995174   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:09.995326   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:02:09.995521   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:02:09.995545   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:09.995577   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:09.995680   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:02:09.999700   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.000319   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:02:10.000361   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.000529   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:02:10.000684   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:02:10.000836   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34107
	I1006 01:02:10.001117   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:02:10.001248   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:02:10.001295   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:10.001853   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:10.001876   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:10.002266   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:10.002794   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:10.002837   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:10.004704   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I1006 01:02:10.005285   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:10.005872   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:10.005898   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:10.006268   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:10.006474   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:10.008174   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:02:10.010216   74302 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 01:02:10.011739   74302 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 01:02:10.011759   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 01:02:10.011778   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:02:10.014395   74302 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-565340" context rescaled to 1 replicas
	I1006 01:02:10.014446   74302 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1006 01:02:10.016059   74302 out.go:177] * Verifying Kubernetes components...
	I1006 01:02:10.017478   74302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 01:02:10.015435   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.015999   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:02:10.017572   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:02:10.017609   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.017767   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:02:10.017895   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:02:10.018006   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:02:10.018724   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39343
	I1006 01:02:10.019267   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:10.019900   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:10.019919   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:10.020312   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:10.020501   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:02:10.024955   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43985
	I1006 01:02:10.026459   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37757
	I1006 01:02:10.026971   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:10.027729   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41837
	I1006 01:02:10.027763   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39749
	I1006 01:02:10.028321   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:10.028368   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:10.028375   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:10.028447   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32927
	I1006 01:02:10.028953   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:10.028977   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:10.029026   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:10.029048   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:10.029115   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:10.029133   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:10.029176   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:10.029187   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:10.029354   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:10.029520   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:10.029580   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:10.029620   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:10.029651   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:10.029717   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:10.030131   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:10.030839   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:10.030892   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:10.031608   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:02:10.033493   74302 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I1006 01:02:10.032143   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:10.032303   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:02:10.032645   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:02:10.034960   74302 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1006 01:02:10.034980   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1006 01:02:10.035000   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:02:10.036557   74302 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.1
	I1006 01:02:10.035347   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:10.038137   74302 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1006 01:02:10.038154   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:10.038186   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.038466   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:02:10.039382   74302 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1006 01:02:10.039070   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42545
	I1006 01:02:10.039478   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:02:10.039553   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:02:10.039885   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:10.040304   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I1006 01:02:10.040710   74302 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1006 01:02:10.041086   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:10.042020   74302 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1006 01:02:10.042042   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1006 01:02:10.042061   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:02:10.042079   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.042217   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:02:10.042244   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:10.042558   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:10.043630   74302 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 01:02:10.042987   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:10.043642   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I1006 01:02:10.043656   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:02:10.043660   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:10.044726   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:10.044789   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:02:10.045810   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:10.045948   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:02:10.047707   74302 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1006 01:02:10.049061   74302 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1006 01:02:10.049076   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1006 01:02:10.049090   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:02:10.047434   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:10.049176   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:10.047946   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.049256   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:02:10.049285   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.048284   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:02:10.048342   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41483
	I1006 01:02:10.048507   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:02:10.048923   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.050021   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:02:10.050021   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:10.051811   74302 out.go:177]   - Using image docker.io/registry:2.8.1
	I1006 01:02:10.050149   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:02:10.050178   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:02:10.050248   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:02:10.050298   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:10.050600   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:10.051789   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.052216   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:02:10.054463   74302 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1006 01:02:10.055844   74302 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1006 01:02:10.054547   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:02:10.053285   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.053347   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:02:10.053382   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:02:10.053459   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:02:10.053782   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:10.053255   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:02:10.055948   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.055972   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1006 01:02:10.055988   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:02:10.056048   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:10.056492   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:10.056494   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:02:10.056549   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:02:10.058138   74302 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1006 01:02:10.056776   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:02:10.056952   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:02:10.057094   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:10.059044   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.060971   74302 out.go:177]   - Using image docker.io/busybox:stable
	I1006 01:02:10.059749   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:02:10.059658   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:02:10.059515   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I1006 01:02:10.059802   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:10.062210   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.062385   74302 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 01:02:10.062404   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1006 01:02:10.062425   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:02:10.062497   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:02:10.062664   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:02:10.062825   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:02:10.065335   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.065694   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:02:10.065722   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.065871   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:02:10.066085   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:02:10.066230   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:02:10.066396   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:02:10.079139   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:10.079691   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:10.079718   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:10.080222   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:10.080453   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:10.082473   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:02:10.084543   74302 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1006 01:02:10.085978   74302 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1006 01:02:10.085995   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1006 01:02:10.086018   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:02:10.089208   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.089679   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:02:10.089706   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.089884   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:02:10.090055   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:02:10.090214   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:02:10.090382   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:02:10.095371   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42985
	I1006 01:02:10.095829   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:10.096355   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:10.096377   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:10.096671   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:10.096841   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:10.098260   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:02:10.098627   74302 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 01:02:10.098643   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 01:02:10.098656   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:02:10.101538   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.101958   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:02:10.101988   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:10.102135   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:02:10.102294   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:02:10.102498   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:02:10.102654   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:02:10.458497   74302 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1006 01:02:10.458520   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1006 01:02:10.462577   74302 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1006 01:02:10.462597   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1006 01:02:10.468475   74302 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1006 01:02:10.468496   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1006 01:02:10.511371   74302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 01:02:10.535494   74302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 01:02:10.540947   74302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 01:02:10.543554   74302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1006 01:02:10.544181   74302 node_ready.go:35] waiting up to 6m0s for node "addons-565340" to be "Ready" ...
	I1006 01:02:10.547257   74302 node_ready.go:49] node "addons-565340" has status "Ready":"True"
	I1006 01:02:10.547275   74302 node_ready.go:38] duration metric: took 3.072896ms waiting for node "addons-565340" to be "Ready" ...
	I1006 01:02:10.547284   74302 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1006 01:02:10.553706   74302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace to be "Ready" ...
	I1006 01:02:10.563003   74302 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1006 01:02:10.563022   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1006 01:02:10.588222   74302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1006 01:02:10.649854   74302 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1006 01:02:10.649888   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1006 01:02:10.650281   74302 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1006 01:02:10.650299   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1006 01:02:10.710846   74302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 01:02:10.742396   74302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 01:02:10.748028   74302 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1006 01:02:10.748047   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1006 01:02:10.750421   74302 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1006 01:02:10.750440   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1006 01:02:10.752788   74302 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1006 01:02:10.752805   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1006 01:02:10.781798   74302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 01:02:10.798887   74302 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1006 01:02:10.798908   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1006 01:02:11.037569   74302 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 01:02:11.037592   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1006 01:02:11.049463   74302 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1006 01:02:11.049498   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1006 01:02:11.077441   74302 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1006 01:02:11.077469   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1006 01:02:11.105951   74302 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1006 01:02:11.105984   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1006 01:02:11.143009   74302 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1006 01:02:11.143041   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1006 01:02:11.159089   74302 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1006 01:02:11.159119   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1006 01:02:11.440858   74302 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1006 01:02:11.440881   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1006 01:02:11.527633   74302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 01:02:11.694206   74302 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1006 01:02:11.694243   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1006 01:02:11.726396   74302 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 01:02:11.726419   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1006 01:02:11.726443   74302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1006 01:02:11.772616   74302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1006 01:02:11.813754   74302 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1006 01:02:11.813784   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1006 01:02:11.923499   74302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 01:02:12.106815   74302 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1006 01:02:12.106835   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1006 01:02:12.109834   74302 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1006 01:02:12.109848   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1006 01:02:12.577746   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:12.581062   74302 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1006 01:02:12.581085   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1006 01:02:12.605565   74302 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1006 01:02:12.605586   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1006 01:02:12.890067   74302 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1006 01:02:12.890100   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1006 01:02:12.983910   74302 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1006 01:02:12.983945   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1006 01:02:13.262564   74302 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1006 01:02:13.262595   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1006 01:02:13.279840   74302 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1006 01:02:13.279864   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1006 01:02:13.518816   74302 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1006 01:02:13.518852   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1006 01:02:13.544540   74302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1006 01:02:13.590083   74302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1006 01:02:15.071149   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:16.628518   74302 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1006 01:02:16.628563   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:02:16.631723   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:16.632143   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:02:16.632168   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:16.632399   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:02:16.632597   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:02:16.632731   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:02:16.632851   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:02:17.124882   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:17.160796   74302 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1006 01:02:17.452292   74302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.940882376s)
	I1006 01:02:17.452353   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:17.452366   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:17.452372   74302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.916846253s)
	I1006 01:02:17.452416   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:17.452435   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:17.452456   74302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.911488869s)
	I1006 01:02:17.452475   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:17.452484   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:17.452532   74302 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.908947547s)
	I1006 01:02:17.452557   74302 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1006 01:02:17.452587   74302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.864315632s)
	I1006 01:02:17.452646   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:17.452688   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:17.452787   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:17.452806   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:17.452811   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:17.452827   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:17.452830   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:17.452835   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:17.452838   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:17.452843   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:17.452849   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:17.452856   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:17.452858   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:17.453162   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:17.453196   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:17.453212   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:17.453218   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:17.453224   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:17.453269   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:17.452845   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:17.453314   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:17.455178   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:17.455194   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:17.455204   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:17.455223   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:17.455239   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:17.455244   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:17.455259   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:17.455475   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:17.455515   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:17.455530   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:17.495807   74302 addons.go:231] Setting addon gcp-auth=true in "addons-565340"
	I1006 01:02:17.495868   74302 host.go:66] Checking if "addons-565340" exists ...
	I1006 01:02:17.496156   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:17.496184   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:17.510399   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I1006 01:02:17.510826   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:17.511310   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:17.511334   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:17.511703   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:17.512322   74302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:02:17.512365   74302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:02:17.526805   74302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39811
	I1006 01:02:17.527219   74302 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:02:17.527711   74302 main.go:141] libmachine: Using API Version  1
	I1006 01:02:17.527741   74302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:02:17.528107   74302 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:02:17.528444   74302 main.go:141] libmachine: (addons-565340) Calling .GetState
	I1006 01:02:17.530135   74302 main.go:141] libmachine: (addons-565340) Calling .DriverName
	I1006 01:02:17.530379   74302 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1006 01:02:17.530404   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHHostname
	I1006 01:02:17.533071   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:17.533559   74302 main.go:141] libmachine: (addons-565340) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:8f:59", ip: ""} in network mk-addons-565340: {Iface:virbr1 ExpiryTime:2023-10-06 02:01:13 +0000 UTC Type:0 Mac:52:54:00:e8:8f:59 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-565340 Clientid:01:52:54:00:e8:8f:59}
	I1006 01:02:17.533595   74302 main.go:141] libmachine: (addons-565340) DBG | domain addons-565340 has defined IP address 192.168.39.147 and MAC address 52:54:00:e8:8f:59 in network mk-addons-565340
	I1006 01:02:17.533729   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHPort
	I1006 01:02:17.533920   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHKeyPath
	I1006 01:02:17.534148   74302 main.go:141] libmachine: (addons-565340) Calling .GetSSHUsername
	I1006 01:02:17.534316   74302 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/addons-565340/id_rsa Username:docker}
	I1006 01:02:17.850741   74302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.139857558s)
	I1006 01:02:17.850787   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:17.850802   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:17.850821   74302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.108389487s)
	I1006 01:02:17.850868   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:17.850887   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:17.851098   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:17.851120   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:17.851132   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:17.851143   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:17.853591   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:17.853591   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:17.853604   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:17.853611   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:17.853624   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:17.853627   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:17.853636   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:17.853647   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:17.853867   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:17.853892   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:17.853910   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:17.883685   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:17.883709   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:17.884088   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:17.884106   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	W1006 01:02:17.884222   74302 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1006 01:02:17.899643   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:17.899671   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:17.899955   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:17.899973   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:17.899988   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:19.574137   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:21.094241   74302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.312396098s)
	I1006 01:02:21.094312   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:21.094344   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:21.094371   74302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.566693364s)
	I1006 01:02:21.094412   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:21.094418   74302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.367950499s)
	I1006 01:02:21.094424   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:21.094438   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:21.094448   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:21.094465   74302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.32181722s)
	I1006 01:02:21.094487   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:21.094499   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:21.094583   74302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.171052327s)
	W1006 01:02:21.094615   74302 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1006 01:02:21.094635   74302 retry.go:31] will retry after 228.008746ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1006 01:02:21.094706   74302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.550126696s)
	I1006 01:02:21.094731   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:21.094743   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:21.094800   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:21.094832   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:21.094841   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:21.094845   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:21.094865   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:21.094870   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:21.094880   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:21.094889   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:21.094881   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:21.094927   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:21.094932   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:21.094939   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:21.094954   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:21.095157   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:21.095162   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:21.095180   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:21.095184   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:21.095191   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:21.095196   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:21.095202   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:21.095220   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:21.095235   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:21.095245   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:21.095254   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:21.095261   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:21.095265   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:21.095277   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:21.095287   74302 addons.go:467] Verifying addon registry=true in "addons-565340"
	I1006 01:02:21.095338   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:21.095365   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:21.095400   74302 addons.go:467] Verifying addon metrics-server=true in "addons-565340"
	I1006 01:02:21.094941   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:21.099385   74302 out.go:177] * Verifying registry addon...
	I1006 01:02:21.097060   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:21.097084   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:21.097092   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:21.097116   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:21.097470   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:21.097521   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:21.100683   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:21.100707   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:21.100713   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:21.100719   74302 addons.go:467] Verifying addon ingress=true in "addons-565340"
	I1006 01:02:21.102425   74302 out.go:177] * Verifying ingress addon...
	I1006 01:02:21.101546   74302 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1006 01:02:21.104754   74302 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1006 01:02:21.120900   74302 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1006 01:02:21.120928   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:21.129368   74302 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1006 01:02:21.129396   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:21.133415   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:21.149209   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:21.322798   74302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 01:02:21.639698   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:21.653324   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:22.074177   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:22.161809   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:22.163361   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:22.645708   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:22.655249   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:23.144909   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:23.155170   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:23.650267   74302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.060114405s)
	I1006 01:02:23.650307   74302 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.11990586s)
	I1006 01:02:23.650316   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:23.650339   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:23.651911   74302 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1006 01:02:23.650737   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:23.650736   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:23.653343   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:23.654676   74302 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1006 01:02:23.653368   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:23.654710   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:23.654474   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:23.656030   74302 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1006 01:02:23.656047   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1006 01:02:23.654993   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:23.656092   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:23.656107   74302 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-565340"
	I1006 01:02:23.655015   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:23.657515   74302 out.go:177] * Verifying csi-hostpath-driver addon...
	I1006 01:02:23.659583   74302 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1006 01:02:23.670301   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:23.690319   74302 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1006 01:02:23.690362   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:23.710894   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:23.778506   74302 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1006 01:02:23.778531   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1006 01:02:23.850308   74302 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 01:02:23.850340   74302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1006 01:02:23.913184   74302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 01:02:24.082847   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:24.140155   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:24.157919   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:24.219235   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:24.638158   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:24.663896   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:24.718514   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:24.833973   74302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.511116455s)
	I1006 01:02:24.834028   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:24.834040   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:24.834386   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:24.834407   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:24.834417   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:24.834427   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:24.834433   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:24.834645   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:24.834661   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:25.139744   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:25.153893   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:25.217217   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:25.631963   74302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.718727218s)
	I1006 01:02:25.632036   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:25.632053   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:25.632387   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:25.632396   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:25.632411   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:25.632422   74302 main.go:141] libmachine: Making call to close driver server
	I1006 01:02:25.632432   74302 main.go:141] libmachine: (addons-565340) Calling .Close
	I1006 01:02:25.632697   74302 main.go:141] libmachine: Successfully made call to close driver server
	I1006 01:02:25.632718   74302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1006 01:02:25.632719   74302 main.go:141] libmachine: (addons-565340) DBG | Closing plugin on server side
	I1006 01:02:25.633974   74302 addons.go:467] Verifying addon gcp-auth=true in "addons-565340"
	I1006 01:02:25.635453   74302 out.go:177] * Verifying gcp-auth addon...
	I1006 01:02:25.637417   74302 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1006 01:02:25.644576   74302 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1006 01:02:25.644592   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:25.644761   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:25.647146   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:25.653204   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:25.717154   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:26.138700   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:26.151829   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:26.154029   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:26.222441   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:26.571326   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:26.641133   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:26.650895   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:26.656492   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:26.716451   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:27.140101   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:27.150884   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:27.153835   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:27.216003   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:27.640894   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:27.651153   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:27.654894   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:27.718124   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:28.138899   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:28.150906   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:28.154543   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:28.217286   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:28.638096   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:28.651224   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:28.655035   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:28.716876   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:29.258840   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:29.259829   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:29.259891   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:29.261193   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:29.264559   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:29.638243   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:29.651088   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:29.653708   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:29.716947   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:30.139275   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:30.152635   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:30.154985   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:30.217260   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:30.639747   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:30.651008   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:30.653883   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:30.718660   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:31.139369   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:31.151167   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:31.154192   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:31.217734   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:31.571399   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:31.639151   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:31.651082   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:31.654564   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:31.716648   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:32.139543   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:32.151171   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:32.153136   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:32.217117   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:32.640605   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:32.650013   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:32.652715   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:32.716386   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:33.141898   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:33.157310   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:33.157474   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:33.217660   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:33.576350   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:33.639120   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:33.650936   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:33.654135   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:33.720236   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:34.139065   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:34.151465   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:34.154147   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:34.219422   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:34.640186   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:34.651809   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:34.654610   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:34.716565   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:35.138799   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:35.151239   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:35.153975   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:35.217126   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:35.640718   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:35.651116   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:35.656235   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:35.718207   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:36.073466   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:36.140909   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:36.151798   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:36.156313   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:36.219684   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:36.639064   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:36.651146   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:36.654009   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:36.719307   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:37.138057   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:37.151362   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:37.154390   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:37.217220   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:37.639873   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:37.650265   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:37.653553   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:37.716320   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:38.138574   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:38.152331   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:38.154731   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:38.219864   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:38.572313   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:38.638654   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:38.650411   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:38.653479   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:38.717196   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:39.139356   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:39.151254   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:39.153762   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:39.220586   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:39.639734   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:39.649995   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:39.652819   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:39.716790   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:40.140601   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:40.159966   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:40.160224   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:40.217289   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:40.641090   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:40.650552   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:40.653344   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:40.717667   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:41.072054   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:41.138740   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:41.150714   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:41.153843   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:41.219368   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:41.638505   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:41.651524   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:41.654493   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:41.716509   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:42.139521   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:42.152833   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:42.153659   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:42.217352   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:42.639195   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:42.651176   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:42.653869   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:42.717126   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:43.072396   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:43.140372   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:43.151214   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:43.154354   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:43.216847   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:43.641569   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:43.652380   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:43.654645   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:43.716751   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:44.139635   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:44.151645   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:44.156497   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:44.217629   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:44.639719   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:44.651652   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:44.658143   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:44.717473   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:45.138525   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:45.152013   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:45.153595   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:45.219767   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:45.571328   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:45.639039   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:45.650710   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:45.654725   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:45.717350   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:46.139755   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:46.150961   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:46.153887   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:46.216467   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:46.642650   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:46.653243   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:46.653748   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:46.723628   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:47.141377   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:47.151763   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:47.154775   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:47.216987   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:47.573129   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:47.638422   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:47.650878   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:47.654117   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:47.718184   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:48.138844   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:48.151032   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:48.154157   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:48.217674   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:48.638917   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:48.650701   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:48.653870   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:48.717298   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:49.139919   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:49.151162   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:49.154500   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:49.217104   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:49.638500   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:49.651624   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:49.653531   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:49.716875   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:50.071684   74302 pod_ready.go:102] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"False"
	I1006 01:02:50.138233   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:50.151178   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:50.153903   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:50.220079   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:50.657521   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:50.669167   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:50.669198   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:50.719194   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:51.072380   74302 pod_ready.go:92] pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace has status "Ready":"True"
	I1006 01:02:51.072410   74302 pod_ready.go:81] duration metric: took 40.51868158s waiting for pod "coredns-5dd5756b68-p8qsp" in "kube-system" namespace to be "Ready" ...
	I1006 01:02:51.072424   74302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sp8js" in "kube-system" namespace to be "Ready" ...
	I1006 01:02:51.075359   74302 pod_ready.go:97] error getting pod "coredns-5dd5756b68-sp8js" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-sp8js" not found
	I1006 01:02:51.075385   74302 pod_ready.go:81] duration metric: took 2.952327ms waiting for pod "coredns-5dd5756b68-sp8js" in "kube-system" namespace to be "Ready" ...
	E1006 01:02:51.075398   74302 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-sp8js" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-sp8js" not found
	I1006 01:02:51.075407   74302 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-565340" in "kube-system" namespace to be "Ready" ...
	I1006 01:02:51.080327   74302 pod_ready.go:92] pod "etcd-addons-565340" in "kube-system" namespace has status "Ready":"True"
	I1006 01:02:51.080351   74302 pod_ready.go:81] duration metric: took 4.936283ms waiting for pod "etcd-addons-565340" in "kube-system" namespace to be "Ready" ...
	I1006 01:02:51.080365   74302 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-565340" in "kube-system" namespace to be "Ready" ...
	I1006 01:02:51.085520   74302 pod_ready.go:92] pod "kube-apiserver-addons-565340" in "kube-system" namespace has status "Ready":"True"
	I1006 01:02:51.085542   74302 pod_ready.go:81] duration metric: took 5.168507ms waiting for pod "kube-apiserver-addons-565340" in "kube-system" namespace to be "Ready" ...
	I1006 01:02:51.085555   74302 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-565340" in "kube-system" namespace to be "Ready" ...
	I1006 01:02:51.090297   74302 pod_ready.go:92] pod "kube-controller-manager-addons-565340" in "kube-system" namespace has status "Ready":"True"
	I1006 01:02:51.090317   74302 pod_ready.go:81] duration metric: took 4.753786ms waiting for pod "kube-controller-manager-addons-565340" in "kube-system" namespace to be "Ready" ...
	I1006 01:02:51.090345   74302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4lr7l" in "kube-system" namespace to be "Ready" ...
	I1006 01:02:51.139337   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:51.151239   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:51.154957   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:51.217945   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:51.269795   74302 pod_ready.go:92] pod "kube-proxy-4lr7l" in "kube-system" namespace has status "Ready":"True"
	I1006 01:02:51.269824   74302 pod_ready.go:81] duration metric: took 179.465896ms waiting for pod "kube-proxy-4lr7l" in "kube-system" namespace to be "Ready" ...
	I1006 01:02:51.269839   74302 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-565340" in "kube-system" namespace to be "Ready" ...
	I1006 01:02:51.639689   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:51.651169   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:51.654223   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:51.668800   74302 pod_ready.go:92] pod "kube-scheduler-addons-565340" in "kube-system" namespace has status "Ready":"True"
	I1006 01:02:51.668821   74302 pod_ready.go:81] duration metric: took 398.973254ms waiting for pod "kube-scheduler-addons-565340" in "kube-system" namespace to be "Ready" ...
	I1006 01:02:51.668830   74302 pod_ready.go:38] duration metric: took 41.121536955s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1006 01:02:51.668853   74302 api_server.go:52] waiting for apiserver process to appear ...
	I1006 01:02:51.668906   74302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 01:02:51.689506   74302 api_server.go:72] duration metric: took 41.675020341s to wait for apiserver process to appear ...
	I1006 01:02:51.689527   74302 api_server.go:88] waiting for apiserver healthz status ...
	I1006 01:02:51.689543   74302 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I1006 01:02:51.694482   74302 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I1006 01:02:51.695696   74302 api_server.go:141] control plane version: v1.28.2
	I1006 01:02:51.695715   74302 api_server.go:131] duration metric: took 6.183752ms to wait for apiserver health ...
	I1006 01:02:51.695723   74302 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 01:02:51.717392   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:51.875103   74302 system_pods.go:59] 18 kube-system pods found
	I1006 01:02:51.875137   74302 system_pods.go:61] "coredns-5dd5756b68-p8qsp" [e4cc9b03-d850-408d-b143-94ff33b3d329] Running
	I1006 01:02:51.875146   74302 system_pods.go:61] "csi-hostpath-attacher-0" [cababe1f-872e-471d-93de-e0a777e88d9a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 01:02:51.875153   74302 system_pods.go:61] "csi-hostpath-resizer-0" [84a60f1b-4833-48b3-9027-0eaaba034b7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 01:02:51.875161   74302 system_pods.go:61] "csi-hostpathplugin-mss6x" [c19807fe-90ed-4052-aa67-c767f7e01a56] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 01:02:51.875170   74302 system_pods.go:61] "etcd-addons-565340" [382931a4-94b3-4ef5-a6ee-5df8be996d60] Running
	I1006 01:02:51.875180   74302 system_pods.go:61] "kube-apiserver-addons-565340" [3152a949-1930-470d-ac9e-3e6ebdd623ce] Running
	I1006 01:02:51.875189   74302 system_pods.go:61] "kube-controller-manager-addons-565340" [e646fe06-9af5-4591-b568-e24ffd509ca5] Running
	I1006 01:02:51.875196   74302 system_pods.go:61] "kube-ingress-dns-minikube" [05401742-3255-46af-9049-d6e73e6ed34b] Running
	I1006 01:02:51.875206   74302 system_pods.go:61] "kube-proxy-4lr7l" [e9afac17-d67c-44a5-bb37-543bc75656f5] Running
	I1006 01:02:51.875215   74302 system_pods.go:61] "kube-scheduler-addons-565340" [e52df14a-4978-4435-94b1-5cf3669b310e] Running
	I1006 01:02:51.875222   74302 system_pods.go:61] "metrics-server-7c66d45ddc-sfzph" [450e3808-b6d6-43ee-9359-1ab47d897c47] Running
	I1006 01:02:51.875232   74302 system_pods.go:61] "nvidia-device-plugin-daemonset-dzf4h" [d76b8151-cfa3-4792-aaae-7bb5841e6485] Running
	I1006 01:02:51.875240   74302 system_pods.go:61] "registry-jc9w8" [70aa0cb1-3f28-45f9-9091-bd1c0832c6e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 01:02:51.875247   74302 system_pods.go:61] "registry-proxy-446ct" [2914b09b-5bf8-49a0-8281-9f2cf66e5694] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 01:02:51.875251   74302 system_pods.go:61] "snapshot-controller-58dbcc7b99-7f9jv" [9c08e305-db2a-4917-b586-80885943876b] Running
	I1006 01:02:51.875256   74302 system_pods.go:61] "snapshot-controller-58dbcc7b99-fqqnt" [8661ee29-df45-4877-b01e-db47524382a7] Running
	I1006 01:02:51.875261   74302 system_pods.go:61] "storage-provisioner" [a1b59944-67e0-4ecf-a194-ce323b1ad9f0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 01:02:51.875275   74302 system_pods.go:61] "tiller-deploy-7b677967b9-m5swh" [ef8be544-3c52-471f-8c04-c11fb7a940fe] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1006 01:02:51.875283   74302 system_pods.go:74] duration metric: took 179.555055ms to wait for pod list to return data ...
	I1006 01:02:51.875295   74302 default_sa.go:34] waiting for default service account to be created ...
	I1006 01:02:52.069061   74302 default_sa.go:45] found service account: "default"
	I1006 01:02:52.069086   74302 default_sa.go:55] duration metric: took 193.784049ms for default service account to be created ...
	I1006 01:02:52.069098   74302 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 01:02:52.139546   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:52.151293   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:52.154306   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:52.217548   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:52.274503   74302 system_pods.go:86] 18 kube-system pods found
	I1006 01:02:52.274528   74302 system_pods.go:89] "coredns-5dd5756b68-p8qsp" [e4cc9b03-d850-408d-b143-94ff33b3d329] Running
	I1006 01:02:52.274538   74302 system_pods.go:89] "csi-hostpath-attacher-0" [cababe1f-872e-471d-93de-e0a777e88d9a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 01:02:52.274546   74302 system_pods.go:89] "csi-hostpath-resizer-0" [84a60f1b-4833-48b3-9027-0eaaba034b7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 01:02:52.274554   74302 system_pods.go:89] "csi-hostpathplugin-mss6x" [c19807fe-90ed-4052-aa67-c767f7e01a56] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 01:02:52.274558   74302 system_pods.go:89] "etcd-addons-565340" [382931a4-94b3-4ef5-a6ee-5df8be996d60] Running
	I1006 01:02:52.274563   74302 system_pods.go:89] "kube-apiserver-addons-565340" [3152a949-1930-470d-ac9e-3e6ebdd623ce] Running
	I1006 01:02:52.274568   74302 system_pods.go:89] "kube-controller-manager-addons-565340" [e646fe06-9af5-4591-b568-e24ffd509ca5] Running
	I1006 01:02:52.274573   74302 system_pods.go:89] "kube-ingress-dns-minikube" [05401742-3255-46af-9049-d6e73e6ed34b] Running
	I1006 01:02:52.274580   74302 system_pods.go:89] "kube-proxy-4lr7l" [e9afac17-d67c-44a5-bb37-543bc75656f5] Running
	I1006 01:02:52.274584   74302 system_pods.go:89] "kube-scheduler-addons-565340" [e52df14a-4978-4435-94b1-5cf3669b310e] Running
	I1006 01:02:52.274591   74302 system_pods.go:89] "metrics-server-7c66d45ddc-sfzph" [450e3808-b6d6-43ee-9359-1ab47d897c47] Running
	I1006 01:02:52.274597   74302 system_pods.go:89] "nvidia-device-plugin-daemonset-dzf4h" [d76b8151-cfa3-4792-aaae-7bb5841e6485] Running
	I1006 01:02:52.274605   74302 system_pods.go:89] "registry-jc9w8" [70aa0cb1-3f28-45f9-9091-bd1c0832c6e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 01:02:52.274610   74302 system_pods.go:89] "registry-proxy-446ct" [2914b09b-5bf8-49a0-8281-9f2cf66e5694] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 01:02:52.274615   74302 system_pods.go:89] "snapshot-controller-58dbcc7b99-7f9jv" [9c08e305-db2a-4917-b586-80885943876b] Running
	I1006 01:02:52.274619   74302 system_pods.go:89] "snapshot-controller-58dbcc7b99-fqqnt" [8661ee29-df45-4877-b01e-db47524382a7] Running
	I1006 01:02:52.274627   74302 system_pods.go:89] "storage-provisioner" [a1b59944-67e0-4ecf-a194-ce323b1ad9f0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 01:02:52.274633   74302 system_pods.go:89] "tiller-deploy-7b677967b9-m5swh" [ef8be544-3c52-471f-8c04-c11fb7a940fe] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1006 01:02:52.274642   74302 system_pods.go:126] duration metric: took 205.537796ms to wait for k8s-apps to be running ...
	I1006 01:02:52.274649   74302 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 01:02:52.274699   74302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 01:02:52.290858   74302 system_svc.go:56] duration metric: took 16.196644ms WaitForService to wait for kubelet.
	I1006 01:02:52.290886   74302 kubeadm.go:581] duration metric: took 42.276406374s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1006 01:02:52.290905   74302 node_conditions.go:102] verifying NodePressure condition ...
	I1006 01:02:52.469746   74302 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1006 01:02:52.469817   74302 node_conditions.go:123] node cpu capacity is 2
	I1006 01:02:52.469835   74302 node_conditions.go:105] duration metric: took 178.924098ms to run NodePressure ...
	I1006 01:02:52.469850   74302 start.go:228] waiting for startup goroutines ...
	I1006 01:02:52.640837   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:52.658947   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:52.658977   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:52.717403   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:53.139133   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:53.151270   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:53.153892   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:53.221696   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:53.639451   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:53.651193   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:53.653612   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:53.723898   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:54.139127   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:54.152165   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:54.153885   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:54.216729   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:54.643777   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:54.650586   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:54.653418   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:54.716734   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:55.140367   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:55.151431   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:55.154399   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:55.218238   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:55.641025   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:55.651101   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:55.654912   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:55.721822   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:56.142463   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:56.152139   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:56.154377   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:56.217114   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:56.638871   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:56.657550   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:56.657869   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:56.716634   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:57.138275   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:57.152862   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:57.156644   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:57.216065   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:57.637984   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:57.652993   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:57.654916   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:57.717288   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:58.140240   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:58.152004   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:58.155640   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:58.216718   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:58.638117   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:58.651110   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:58.653496   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:58.716746   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:59.140739   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:59.151784   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:59.153588   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:59.215891   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:02:59.639929   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:02:59.650884   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:02:59.653596   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:02:59.715887   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:00.140186   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:00.151407   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:00.154765   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:00.217555   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:00.638494   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:00.650792   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:00.653580   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:00.715895   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:01.139596   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:01.153187   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:01.156985   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:01.216881   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:01.638669   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:01.650964   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:01.653608   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:01.719753   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:02.139668   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:02.152429   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:02.154237   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:02.218784   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:02.638159   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:02.651287   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:02.653739   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:02.718725   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:03.139492   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:03.151018   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:03.154179   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:03.216917   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:03.639463   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:03.651997   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:03.653962   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:03.717013   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:04.139553   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:04.150791   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:04.153507   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:04.216548   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:04.638914   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:04.650709   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:04.654261   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:04.716878   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:05.141756   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:05.151211   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:05.154855   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:05.217431   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:05.639231   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:05.651009   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:05.654231   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:05.717198   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:06.138639   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:06.151021   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:06.154109   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:06.216767   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:06.640260   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:06.653656   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:06.653998   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:06.724362   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:07.139138   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:07.151337   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:07.154043   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:07.217051   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:07.638984   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:07.651405   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:07.653916   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:07.718177   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:08.138183   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:08.151185   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:08.154415   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:08.221819   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:08.640083   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:08.652803   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:08.655256   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:08.717466   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:09.140185   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:09.151015   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:09.155085   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:09.217297   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:09.639279   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:09.651022   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:09.653978   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:09.717449   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:10.140111   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:10.151752   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:10.155085   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:10.221391   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:10.639343   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:10.651237   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:10.654766   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:10.718211   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:11.142342   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:11.152582   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:11.154214   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:11.224576   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:11.638254   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:11.651051   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:11.653906   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:11.716801   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:12.139007   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:12.150924   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:12.154509   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:12.217314   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:12.644835   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:12.652767   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:12.655420   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:12.717550   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:13.139390   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:13.151809   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:13.154123   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:13.217986   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:13.639084   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:13.650778   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:13.655356   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:13.717407   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:14.140059   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:14.151104   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:14.154149   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:14.217460   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:14.638908   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:14.651598   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:14.654830   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:14.717746   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:15.139338   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:15.151601   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:15.154492   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:15.218566   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:15.641280   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:15.652028   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:15.654108   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:15.717516   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:16.140637   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:16.150436   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:16.153118   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:16.216826   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:16.640410   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:16.651716   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:16.654522   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:16.721393   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:17.138651   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:17.150505   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:17.153516   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:17.216103   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:17.638141   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:17.650906   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:17.653797   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:17.717033   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:18.139272   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:18.154558   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:18.154885   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:18.217870   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:18.639622   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:18.651142   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:18.653723   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:18.717282   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:19.139736   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:19.150709   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:19.153969   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:19.221015   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:19.638627   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:19.650688   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:19.653450   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:19.717180   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:20.144266   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:20.151071   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:20.153933   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:20.218097   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:20.640738   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:20.650522   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:20.653272   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:20.717626   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:21.140123   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:21.150796   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:21.154050   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:21.216674   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:21.639289   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:21.650855   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:21.654253   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:21.717876   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:22.139257   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:22.152912   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:22.154985   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:22.217356   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:22.640096   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 01:03:22.651528   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:22.654957   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:22.718110   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:23.140451   74302 kapi.go:107] duration metric: took 1m2.038895875s to wait for kubernetes.io/minikube-addons=registry ...
	I1006 01:03:23.151483   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:23.154402   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:23.220723   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:23.651159   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:23.654084   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:23.717194   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:24.155101   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:24.155626   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:24.218866   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:24.651535   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:24.654849   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:24.717430   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:25.151343   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:25.155218   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:25.217076   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:25.650602   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:25.654253   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:25.720256   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:26.152357   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:26.155395   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:26.217963   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:26.651533   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:26.654063   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:26.717846   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:27.150844   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:27.153887   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:27.216709   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:27.651768   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:27.656324   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:27.717223   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:28.152141   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:28.155009   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:28.218012   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:28.651756   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:28.653910   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:28.717114   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:29.152711   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:29.155598   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:29.217526   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:29.651704   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:29.655488   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:29.722445   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:30.153601   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:30.155501   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:30.218489   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:30.652352   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:30.654687   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:30.721696   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:31.237500   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:31.239358   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:31.243291   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:31.651920   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:31.656466   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:31.716809   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:32.151302   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:32.154879   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:32.218215   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:32.650914   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:32.655162   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:32.716552   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:33.152033   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:33.156220   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:33.217855   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:33.651511   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:33.654652   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:33.717984   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:34.151004   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:34.154545   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:34.217668   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:34.651301   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:34.654680   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:34.717970   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:35.151780   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:35.154457   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:35.217792   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:35.651319   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:35.656572   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:35.719541   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:36.158291   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:36.158936   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:36.217731   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:36.650980   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:36.653316   74302 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 01:03:36.716722   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:37.154592   74302 kapi.go:107] duration metric: took 1m16.049830345s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1006 01:03:37.156038   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:37.218173   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:37.651680   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:37.716938   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:38.151779   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:38.217183   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:38.653793   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:38.744027   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:39.150905   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:39.217326   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:39.651903   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:39.717525   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:40.156408   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:40.225928   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:40.651090   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:40.717531   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:41.151807   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:41.216974   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:41.652397   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:41.717090   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:42.151730   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:42.217289   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:42.651444   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:42.722771   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:43.152982   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:43.219104   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:43.651839   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:43.717147   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 01:03:44.151408   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:44.217363   74302 kapi.go:107] duration metric: took 1m20.557774586s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1006 01:03:44.651864   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:45.153044   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:45.651031   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:46.150897   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:46.651857   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:47.152357   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:47.651786   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:48.151727   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:48.652599   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:49.151876   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:49.652042   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:50.151036   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:50.651592   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:51.152165   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:51.651555   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:52.151766   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:52.651388   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:53.153157   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:53.650926   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:54.151997   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:54.652042   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:55.151231   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:55.651884   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:56.152591   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:56.651134   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:57.150635   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:57.652406   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:58.151646   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:58.652515   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:59.152652   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:03:59.651491   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:00.152581   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:00.651665   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:01.152329   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:01.651784   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:02.151588   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:02.652280   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:03.151438   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:03.651629   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:04.152374   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:04.650727   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:05.151959   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:05.651937   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:06.153447   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:06.651557   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:07.151145   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:07.650983   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:08.151328   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:08.651102   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:09.151653   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:09.651415   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:10.151824   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:10.652599   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:11.151932   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:11.652206   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:12.155851   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:12.652560   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:13.151729   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:13.651968   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:14.152281   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:14.651135   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:15.151298   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:15.652552   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:16.151735   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:16.651850   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:17.152301   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:17.651211   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:18.151542   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:18.651389   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:19.151504   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:19.651066   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:20.152529   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:20.651074   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:21.150658   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:21.660109   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:22.151211   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:22.651762   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:23.151741   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:23.651654   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:24.152887   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:24.653482   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:25.151198   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:25.650993   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:26.151308   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:26.651139   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:27.152193   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:27.651163   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:28.151151   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:28.651200   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:29.151919   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:29.652146   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:30.151329   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:30.651193   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:31.151450   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:31.651642   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:32.151228   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:32.651107   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:33.151507   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:33.654297   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:34.152109   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:34.651845   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:35.152158   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:35.650706   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:36.151953   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:36.650819   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:37.152517   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:37.651866   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:38.152254   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:38.651282   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:39.151661   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:39.652165   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:40.151402   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:40.651350   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:41.151593   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:41.651245   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:42.152252   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:42.650701   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:43.151565   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:43.652259   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:44.152557   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:44.651008   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:45.152003   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:45.654240   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:46.151923   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:46.652860   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:47.151850   74302 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 01:04:47.651420   74302 kapi.go:107] duration metric: took 2m22.013997482s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1006 01:04:47.653206   74302 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-565340 cluster.
	I1006 01:04:47.654671   74302 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1006 01:04:47.656092   74302 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1006 01:04:47.657736   74302 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, storage-provisioner-rancher, metrics-server, helm-tiller, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1006 01:04:47.659020   74302 addons.go:502] enable addons completed in 2m37.751365251s: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner storage-provisioner-rancher metrics-server helm-tiller inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1006 01:04:47.659060   74302 start.go:233] waiting for cluster config update ...
	I1006 01:04:47.659083   74302 start.go:242] writing updated cluster config ...
	I1006 01:04:47.659365   74302 ssh_runner.go:195] Run: rm -f paused
	I1006 01:04:47.713802   74302 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1006 01:04:47.715423   74302 out.go:177] * Done! kubectl is now configured to use "addons-565340" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	189dd696545b4       97e050c3e21e9       4 seconds ago        Running             hello-world-app                          0                   f7a17292fed51       hello-world-app-5d77478584-v2757
	c822729a60634       98f6c3b32d565       9 seconds ago        Exited              helm-test                                0                   dcff5cf7fabe3       helm-test
	e6c71c9413b93       d571254277f6a       17 seconds ago       Running             nginx                                    0                   4a147c03269e8       nginx
	82753d54ddabe       a416a98b71e22       34 seconds ago       Exited              helper-pod                               0                   0adcc93d931d3       helper-pod-delete-pvc-0eee7991-a6eb-4053-90a9-4fed6ee19f1f
	55c8dc32926f9       beae173ccac6a       35 seconds ago       Exited              registry-test                            0                   988a06494630a       registry-test
	52eb9078a7f6a       a416a98b71e22       37 seconds ago       Exited              busybox                                  0                   9c2debab30882       test-local-path
	a0f7fb526820a       a416a98b71e22       42 seconds ago       Exited              helper-pod                               0                   c3f47be56879d       helper-pod-create-pvc-0eee7991-a6eb-4053-90a9-4fed6ee19f1f
	13058ebe1f708       6d2a98b274382       49 seconds ago       Running             gcp-auth                                 0                   8118a4fdcefa3       gcp-auth-d4c87556c-fqkvf
	9e469c3ef356c       738351fd438f0       About a minute ago   Running             csi-snapshotter                          0                   1d5a7e668ba9e       csi-hostpathplugin-mss6x
	2a6de8adf503a       931dbfd16f87c       About a minute ago   Running             csi-provisioner                          0                   1d5a7e668ba9e       csi-hostpathplugin-mss6x
	41b9b18ebba0b       e899260153aed       About a minute ago   Running             liveness-probe                           0                   1d5a7e668ba9e       csi-hostpathplugin-mss6x
	ef951384e4ba7       e255e073c508c       About a minute ago   Running             hostpath                                 0                   1d5a7e668ba9e       csi-hostpathplugin-mss6x
	9272272d3fddb       88ef14a257f42       About a minute ago   Running             node-driver-registrar                    0                   1d5a7e668ba9e       csi-hostpathplugin-mss6x
	88c6f06234ae2       19a639eda60f0       2 minutes ago        Running             csi-resizer                              0                   bc59080eefdbe       csi-hostpath-resizer-0
	f5c20850de2d7       a1ed5895ba635       2 minutes ago        Running             csi-external-health-monitor-controller   0                   1d5a7e668ba9e       csi-hostpathplugin-mss6x
	1d05e541205d9       59cbb42146a37       2 minutes ago        Running             csi-attacher                             0                   2cc34d5bdfb65       csi-hostpath-attacher-0
	16dcdb4fda19c       7e7451bb70423       2 minutes ago        Exited              patch                                    0                   6b44d391321d5       ingress-nginx-admission-patch-t8b4p
	4e6a6f5180183       7e7451bb70423       2 minutes ago        Exited              create                                   0                   33df642b991bf       ingress-nginx-admission-create-hl7rd
	4381d8fa89f72       6e38f40d628db       2 minutes ago        Running             storage-provisioner                      0                   0b6f59b6ca1bd       storage-provisioner
	433f8cf74022d       aa61ee9c70bc4       2 minutes ago        Running             volume-snapshot-controller               0                   cd6d549340500       snapshot-controller-58dbcc7b99-7f9jv
	f1514a7c6543a       1499ed4fbd0aa       2 minutes ago        Running             minikube-ingress-dns                     0                   877b5168494ac       kube-ingress-dns-minikube
	0ab9501ba3a1f       aa61ee9c70bc4       2 minutes ago        Running             volume-snapshot-controller               0                   045ad8367147e       snapshot-controller-58dbcc7b99-fqqnt
	368f71dbcb2d3       ead0a4a53df89       3 minutes ago        Running             coredns                                  0                   81d6b6a67f0a1       coredns-5dd5756b68-p8qsp
	43ba6e202cd6e       c120fed2beb84       3 minutes ago        Running             kube-proxy                               0                   d4dc719ce60dd       kube-proxy-4lr7l
	14329d92c3287       73deb9a3f7025       3 minutes ago        Running             etcd                                     0                   3ed691b89550f       etcd-addons-565340
	c1c8515aedefb       7a5d9d67a13f6       3 minutes ago        Running             kube-scheduler                           0                   af4a86ee0198d       kube-scheduler-addons-565340
	7052bbc0ce89a       55f13c92defb1       3 minutes ago        Running             kube-controller-manager                  0                   370e7d7fcd749       kube-controller-manager-addons-565340
	b8d31b5e3d619       cdcab12b2dd16       3 minutes ago        Running             kube-apiserver                           0                   a571444024c54       kube-apiserver-addons-565340
	
	* 
	* ==> containerd <==
	* -- Journal begins at Fri 2023-10-06 01:01:09 UTC, ends at Fri 2023-10-06 01:05:36 UTC. --
	Oct 06 01:05:32 addons-565340 containerd[689]: time="2023-10-06T01:05:32.292085247Z" level=info msg="Container to stop \"adb690399fcaa11ba8d50a18697cf62d171f366b5bb931b383254af2d9dd671d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Oct 06 01:05:32 addons-565340 containerd[689]: time="2023-10-06T01:05:32.337654232Z" level=info msg="shim disconnected" id=cddcd7942121655521eb6a27222aa62de005bbdcdb48eb0e60c522a0488aa986 namespace=k8s.io
	Oct 06 01:05:32 addons-565340 containerd[689]: time="2023-10-06T01:05:32.337724133Z" level=warning msg="cleaning up after shim disconnected" id=cddcd7942121655521eb6a27222aa62de005bbdcdb48eb0e60c522a0488aa986 namespace=k8s.io
	Oct 06 01:05:32 addons-565340 containerd[689]: time="2023-10-06T01:05:32.337735915Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 06 01:05:32 addons-565340 containerd[689]: time="2023-10-06T01:05:32.396220250Z" level=info msg="TearDown network for sandbox \"cddcd7942121655521eb6a27222aa62de005bbdcdb48eb0e60c522a0488aa986\" successfully"
	Oct 06 01:05:32 addons-565340 containerd[689]: time="2023-10-06T01:05:32.396282645Z" level=info msg="StopPodSandbox for \"cddcd7942121655521eb6a27222aa62de005bbdcdb48eb0e60c522a0488aa986\" returns successfully"
	Oct 06 01:05:32 addons-565340 containerd[689]: time="2023-10-06T01:05:32.862653204Z" level=info msg="RemoveContainer for \"adb690399fcaa11ba8d50a18697cf62d171f366b5bb931b383254af2d9dd671d\""
	Oct 06 01:05:32 addons-565340 containerd[689]: time="2023-10-06T01:05:32.879174038Z" level=info msg="RemoveContainer for \"adb690399fcaa11ba8d50a18697cf62d171f366b5bb931b383254af2d9dd671d\" returns successfully"
	Oct 06 01:05:32 addons-565340 containerd[689]: time="2023-10-06T01:05:32.879931564Z" level=error msg="ContainerStatus for \"adb690399fcaa11ba8d50a18697cf62d171f366b5bb931b383254af2d9dd671d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adb690399fcaa11ba8d50a18697cf62d171f366b5bb931b383254af2d9dd671d\": not found"
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.597135853Z" level=info msg="StopContainer for \"5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db\" with timeout 30 (s)"
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.598912894Z" level=info msg="Stop container \"5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db\" with signal quit"
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.663467649Z" level=info msg="shim disconnected" id=5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db namespace=k8s.io
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.663540613Z" level=warning msg="cleaning up after shim disconnected" id=5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db namespace=k8s.io
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.663551940Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.693714785Z" level=info msg="StopContainer for \"5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db\" returns successfully"
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.694518606Z" level=info msg="StopPodSandbox for \"8a83cc553c1fabd37a30347d4d6b060d28a3c96ab3637fe86e72d22da21d622a\""
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.694922956Z" level=info msg="Container to stop \"5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.748969802Z" level=info msg="shim disconnected" id=8a83cc553c1fabd37a30347d4d6b060d28a3c96ab3637fe86e72d22da21d622a namespace=k8s.io
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.749013868Z" level=warning msg="cleaning up after shim disconnected" id=8a83cc553c1fabd37a30347d4d6b060d28a3c96ab3637fe86e72d22da21d622a namespace=k8s.io
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.749022258Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.829763460Z" level=info msg="TearDown network for sandbox \"8a83cc553c1fabd37a30347d4d6b060d28a3c96ab3637fe86e72d22da21d622a\" successfully"
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.829956140Z" level=info msg="StopPodSandbox for \"8a83cc553c1fabd37a30347d4d6b060d28a3c96ab3637fe86e72d22da21d622a\" returns successfully"
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.878225413Z" level=info msg="RemoveContainer for \"5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db\""
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.892240049Z" level=info msg="RemoveContainer for \"5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db\" returns successfully"
	Oct 06 01:05:35 addons-565340 containerd[689]: time="2023-10-06T01:05:35.892962702Z" level=error msg="ContainerStatus for \"5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db\": not found"
	
	* 
	* ==> coredns [368f71dbcb2d3034f059177e098c101a602f3e6b24e9fab70a1531fdfcd8d361] <==
	* [INFO] 10.244.0.8:45679 - 47748 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000329571s
	[INFO] 10.244.0.8:33860 - 37154 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000678884s
	[INFO] 10.244.0.8:33860 - 39457 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000670909s
	[INFO] 10.244.0.8:58680 - 23285 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000171565s
	[INFO] 10.244.0.8:58680 - 48122 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000545925s
	[INFO] 10.244.0.8:44749 - 17990 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000351919s
	[INFO] 10.244.0.8:44749 - 59204 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000583772s
	[INFO] 10.244.0.8:49400 - 45279 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000286224s
	[INFO] 10.244.0.8:49400 - 44252 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000099059s
	[INFO] 10.244.0.8:55123 - 27868 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000215834s
	[INFO] 10.244.0.8:55123 - 60639 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113421s
	[INFO] 10.244.0.8:36268 - 3566 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000138517s
	[INFO] 10.244.0.8:36268 - 3308 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126837s
	[INFO] 10.244.0.8:54776 - 14810 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000150364s
	[INFO] 10.244.0.8:54776 - 35800 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000148694s
	[INFO] 10.244.0.20:46616 - 23229 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000291121s
	[INFO] 10.244.0.20:46993 - 30199 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001161994s
	[INFO] 10.244.0.20:41181 - 31572 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000153464s
	[INFO] 10.244.0.20:52777 - 10150 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139719s
	[INFO] 10.244.0.20:58092 - 30271 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000183594s
	[INFO] 10.244.0.20:53468 - 21797 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000080995s
	[INFO] 10.244.0.20:40513 - 47875 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000824488s
	[INFO] 10.244.0.20:33244 - 45493 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.000502078s
	[INFO] 10.244.0.23:38858 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000346101s
	[INFO] 10.244.0.23:47165 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121178s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-565340
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-565340
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=84890cb24d0240d9d992d7c7712ee519ceed4154
	                    minikube.k8s.io/name=addons-565340
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_06T01_01_56_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-565340
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-565340"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Oct 2023 01:01:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-565340
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Oct 2023 01:05:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Oct 2023 01:05:30 +0000   Fri, 06 Oct 2023 01:01:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Oct 2023 01:05:30 +0000   Fri, 06 Oct 2023 01:01:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Oct 2023 01:05:30 +0000   Fri, 06 Oct 2023 01:01:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Oct 2023 01:05:30 +0000   Fri, 06 Oct 2023 01:01:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    addons-565340
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 c29f0afc5441426e84e0de4d0e9a12cc
	  System UUID:                c29f0afc-5441-426e-84e0-de4d0e9a12cc
	  Boot ID:                    14393d40-c221-4088-a9fd-9fdb1dcf2941
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-v2757         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  gcp-auth                    gcp-auth-d4c87556c-fqkvf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  headlamp                    headlamp-58b88cff49-hgwch                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-p8qsp                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m27s
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 csi-hostpathplugin-mss6x                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 etcd-addons-565340                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m40s
	  kube-system                 kube-apiserver-addons-565340             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kube-controller-manager-addons-565340    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kube-ingress-dns-minikube                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 kube-proxy-4lr7l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-scheduler-addons-565340             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 snapshot-controller-58dbcc7b99-7f9jv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 snapshot-controller-58dbcc7b99-fqqnt     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m25s  kube-proxy       
	  Normal  Starting                 3m41s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m41s  kubelet          Node addons-565340 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m41s  kubelet          Node addons-565340 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m41s  kubelet          Node addons-565340 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m40s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m40s  kubelet          Node addons-565340 status is now: NodeReady
	  Normal  RegisteredNode           3m28s  node-controller  Node addons-565340 event: Registered Node addons-565340 in Controller
	
	* 
	* ==> dmesg <==
	* [  +3.386319] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.145319] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.089042] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.802419] systemd-fstab-generator[558]: Ignoring "noauto" for root device
	[  +0.107991] systemd-fstab-generator[569]: Ignoring "noauto" for root device
	[  +0.150452] systemd-fstab-generator[583]: Ignoring "noauto" for root device
	[  +0.114510] systemd-fstab-generator[594]: Ignoring "noauto" for root device
	[  +0.251448] systemd-fstab-generator[621]: Ignoring "noauto" for root device
	[  +5.658499] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[ +20.275468] systemd-fstab-generator[987]: Ignoring "noauto" for root device
	[  +8.724692] systemd-fstab-generator[1338]: Ignoring "noauto" for root device
	[Oct 6 01:02] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.176560] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.041685] kauditd_printk_skb: 27 callbacks suppressed
	[ +11.924527] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 6 01:03] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.152339] kauditd_printk_skb: 11 callbacks suppressed
	[Oct 6 01:04] kauditd_printk_skb: 14 callbacks suppressed
	[Oct 6 01:05] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.027346] kauditd_printk_skb: 19 callbacks suppressed
	[ +16.177959] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.120070] kauditd_printk_skb: 29 callbacks suppressed
	
	* 
	* ==> etcd [14329d92c3287792021d19cf8dbf2f31e0784d71acab3814de0be076b35ed5c7] <==
	* {"level":"info","ts":"2023-10-06T01:01:51.782106Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.147:2379"}
	{"level":"info","ts":"2023-10-06T01:01:51.782185Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"582b8c8375119e1d","local-member-id":"c194f0f1585e7a7d","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-06T01:01:51.782232Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-06T01:01:51.782246Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-06T01:01:51.775066Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-06T01:01:51.782256Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-06T01:02:18.693349Z","caller":"traceutil/trace.go:171","msg":"trace[9652333] linearizableReadLoop","detail":"{readStateIndex:619; appliedIndex:618; }","duration":"126.907186ms","start":"2023-10-06T01:02:18.566414Z","end":"2023-10-06T01:02:18.693321Z","steps":["trace[9652333] 'read index received'  (duration: 126.785368ms)","trace[9652333] 'applied index is now lower than readState.Index'  (duration: 120.226µs)"],"step_count":2}
	{"level":"info","ts":"2023-10-06T01:02:18.693595Z","caller":"traceutil/trace.go:171","msg":"trace[741930275] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"145.41043ms","start":"2023-10-06T01:02:18.548176Z","end":"2023-10-06T01:02:18.693587Z","steps":["trace[741930275] 'process raft request'  (duration: 145.062324ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-06T01:02:18.693758Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.346131ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-p8qsp\" ","response":"range_response_count:1 size:4742"}
	{"level":"info","ts":"2023-10-06T01:02:18.693799Z","caller":"traceutil/trace.go:171","msg":"trace[1046611944] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-p8qsp; range_end:; response_count:1; response_revision:603; }","duration":"127.400009ms","start":"2023-10-06T01:02:18.566391Z","end":"2023-10-06T01:02:18.693791Z","steps":["trace[1046611944] 'agreement among raft nodes before linearized reading'  (duration: 127.292211ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-06T01:02:18.695107Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.849621ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-06T01:02:18.695145Z","caller":"traceutil/trace.go:171","msg":"trace[736206266] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:603; }","duration":"119.892084ms","start":"2023-10-06T01:02:18.575245Z","end":"2023-10-06T01:02:18.695137Z","steps":["trace[736206266] 'agreement among raft nodes before linearized reading'  (duration: 119.540415ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-06T01:02:29.24155Z","caller":"traceutil/trace.go:171","msg":"trace[1941589527] linearizableReadLoop","detail":"{readStateIndex:823; appliedIndex:822; }","duration":"176.21595ms","start":"2023-10-06T01:02:29.065319Z","end":"2023-10-06T01:02:29.241535Z","steps":["trace[1941589527] 'read index received'  (duration: 175.919214ms)","trace[1941589527] 'applied index is now lower than readState.Index'  (duration: 295.547µs)"],"step_count":2}
	{"level":"info","ts":"2023-10-06T01:02:29.241706Z","caller":"traceutil/trace.go:171","msg":"trace[1347693743] transaction","detail":"{read_only:false; response_revision:805; number_of_response:1; }","duration":"379.625755ms","start":"2023-10-06T01:02:28.862073Z","end":"2023-10-06T01:02:29.241699Z","steps":["trace[1347693743] 'process raft request'  (duration: 379.216936ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-06T01:02:29.242266Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.593501ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81738"}
	{"level":"info","ts":"2023-10-06T01:02:29.242327Z","caller":"traceutil/trace.go:171","msg":"trace[168036984] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:805; }","duration":"108.70555ms","start":"2023-10-06T01:02:29.133612Z","end":"2023-10-06T01:02:29.242318Z","steps":["trace[168036984] 'agreement among raft nodes before linearized reading'  (duration: 108.333932ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-06T01:02:29.242802Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.517296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-p8qsp\" ","response":"range_response_count:1 size:4742"}
	{"level":"warn","ts":"2023-10-06T01:02:29.243568Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-06T01:02:28.86204Z","time spent":"379.685268ms","remote":"127.0.0.1:33764","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":840,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-5c4c674fdc-vg4wl.178b5f00b0922fe7\" mod_revision:756 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-5c4c674fdc-vg4wl.178b5f00b0922fe7\" value_size:733 lease:8826363687591813905 >> failure:<request_range:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-5c4c674fdc-vg4wl.178b5f00b0922fe7\" > >"}
	{"level":"info","ts":"2023-10-06T01:02:29.242917Z","caller":"traceutil/trace.go:171","msg":"trace[1757897200] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-p8qsp; range_end:; response_count:1; response_revision:805; }","duration":"177.637596ms","start":"2023-10-06T01:02:29.065272Z","end":"2023-10-06T01:02:29.24291Z","steps":["trace[1757897200] 'agreement among raft nodes before linearized reading'  (duration: 177.489115ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-06T01:02:38.971779Z","caller":"traceutil/trace.go:171","msg":"trace[1375595241] transaction","detail":"{read_only:false; response_revision:829; number_of_response:1; }","duration":"182.670079ms","start":"2023-10-06T01:02:38.78909Z","end":"2023-10-06T01:02:38.97176Z","steps":["trace[1375595241] 'process raft request'  (duration: 182.569524ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-06T01:03:09.02448Z","caller":"traceutil/trace.go:171","msg":"trace[1889058891] transaction","detail":"{read_only:false; response_revision:922; number_of_response:1; }","duration":"299.937451ms","start":"2023-10-06T01:03:08.724503Z","end":"2023-10-06T01:03:09.024441Z","steps":["trace[1889058891] 'process raft request'  (duration: 299.784149ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-06T01:03:09.024737Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-06T01:03:08.724489Z","time spent":"300.087321ms","remote":"127.0.0.1:33784","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:920 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-10-06T01:03:31.226577Z","caller":"traceutil/trace.go:171","msg":"trace[1293439006] transaction","detail":"{read_only:false; response_revision:1014; number_of_response:1; }","duration":"159.571464ms","start":"2023-10-06T01:03:31.066989Z","end":"2023-10-06T01:03:31.22656Z","steps":["trace[1293439006] 'process raft request'  (duration: 159.409966ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-06T01:03:31.230684Z","caller":"traceutil/trace.go:171","msg":"trace[1129187487] transaction","detail":"{read_only:false; response_revision:1015; number_of_response:1; }","duration":"140.162379ms","start":"2023-10-06T01:03:31.090509Z","end":"2023-10-06T01:03:31.230671Z","steps":["trace[1129187487] 'process raft request'  (duration: 139.820363ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-06T01:03:35.95434Z","caller":"traceutil/trace.go:171","msg":"trace[2045944363] transaction","detail":"{read_only:false; response_revision:1034; number_of_response:1; }","duration":"238.267729ms","start":"2023-10-06T01:03:35.716059Z","end":"2023-10-06T01:03:35.954327Z","steps":["trace[2045944363] 'process raft request'  (duration: 238.10038ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [13058ebe1f708b1c67aba80d106ed18542bbfc518e07dbab58cd7cef37b2ebc6] <==
	* 2023/10/06 01:04:47 GCP Auth Webhook started!
	2023/10/06 01:04:48 Ready to marshal response ...
	2023/10/06 01:04:48 Ready to write response ...
	2023/10/06 01:04:48 Ready to marshal response ...
	2023/10/06 01:04:48 Ready to write response ...
	2023/10/06 01:04:57 Ready to marshal response ...
	2023/10/06 01:04:57 Ready to write response ...
	2023/10/06 01:05:01 Ready to marshal response ...
	2023/10/06 01:05:01 Ready to write response ...
	2023/10/06 01:05:04 Ready to marshal response ...
	2023/10/06 01:05:04 Ready to write response ...
	2023/10/06 01:05:05 Ready to marshal response ...
	2023/10/06 01:05:05 Ready to write response ...
	2023/10/06 01:05:21 Ready to marshal response ...
	2023/10/06 01:05:21 Ready to write response ...
	2023/10/06 01:05:24 Ready to marshal response ...
	2023/10/06 01:05:24 Ready to write response ...
	2023/10/06 01:05:26 Ready to marshal response ...
	2023/10/06 01:05:26 Ready to write response ...
	2023/10/06 01:05:30 Ready to marshal response ...
	2023/10/06 01:05:30 Ready to write response ...
	2023/10/06 01:05:30 Ready to marshal response ...
	2023/10/06 01:05:30 Ready to write response ...
	2023/10/06 01:05:30 Ready to marshal response ...
	2023/10/06 01:05:30 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  01:05:37 up 4 min,  0 users,  load average: 2.05, 1.51, 0.69
	Linux addons-565340 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b8d31b5e3d619da81b9537ec129390fed9dc1e7117fe4355a6bab99688910a12] <==
	* I1006 01:02:23.271638       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I1006 01:02:23.510413       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.108.80.210"}
	W1006 01:02:24.675642       1 aggregator.go:165] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1006 01:02:25.429505       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.108.51.14"}
	E1006 01:02:49.868283       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.5.65:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.5.65:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.5.65:443: connect: connection refused
	W1006 01:02:49.868387       1 handler_proxy.go:93] no RequestInfo found in the context
	E1006 01:02:49.868468       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1006 01:02:49.869031       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.5.65:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.5.65:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.5.65:443: connect: connection refused
	I1006 01:02:49.870592       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1006 01:02:49.874503       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.5.65:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.5.65:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.5.65:443: connect: connection refused
	I1006 01:02:49.946425       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1006 01:02:53.274902       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1006 01:03:53.278299       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1006 01:04:53.277482       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1006 01:04:59.432728       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1006 01:04:59.461549       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1006 01:05:00.496193       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1006 01:05:04.994791       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1006 01:05:05.286470       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.151.15"}
	E1006 01:05:17.339634       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1006 01:05:20.367559       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1006 01:05:26.978416       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.14.210"}
	I1006 01:05:30.709251       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.23.144"}
	
	* 
	* ==> kube-controller-manager [7052bbc0ce89ab31534ba7490a91c018f17bf7232ea6f412bb4abc686af0a6e2] <==
	* I1006 01:05:23.997403       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1006 01:05:26.825726       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1006 01:05:26.869789       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-v2757"
	I1006 01:05:26.895278       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="69.302283ms"
	I1006 01:05:26.921904       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="26.574043ms"
	I1006 01:05:26.922022       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="53.772µs"
	I1006 01:05:28.363554       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1006 01:05:28.366444       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5c4c674fdc" duration="3.425µs"
	I1006 01:05:28.372581       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1006 01:05:29.474489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="9.152µs"
	I1006 01:05:30.735942       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-58b88cff49 to 1"
	I1006 01:05:30.741051       1 event.go:307] "Event occurred" object="headlamp/headlamp-58b88cff49" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-58b88cff49-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	I1006 01:05:30.751985       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="18.093783ms"
	E1006 01:05:30.752031       1 replica_set.go:557] sync "headlamp/headlamp-58b88cff49" failed with pods "headlamp-58b88cff49-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I1006 01:05:30.782360       1 event.go:307] "Event occurred" object="headlamp/headlamp-58b88cff49" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-58b88cff49-hgwch"
	I1006 01:05:30.795397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="43.333906ms"
	I1006 01:05:30.842439       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="46.917449ms"
	I1006 01:05:30.861290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="18.325072ms"
	I1006 01:05:30.861501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="39.789µs"
	W1006 01:05:31.866203       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1006 01:05:31.866270       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1006 01:05:32.878731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.250944ms"
	I1006 01:05:32.881445       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="30.996µs"
	I1006 01:05:37.373441       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I1006 01:05:37.490550       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	
	* 
	* ==> kube-proxy [43ba6e202cd6efdc55545d31bcf63a8c1ca7fa2f493d0fa62558e9055c15c8df] <==
	* I1006 01:02:11.468555       1 server_others.go:69] "Using iptables proxy"
	I1006 01:02:11.484091       1 node.go:141] Successfully retrieved node IP: 192.168.39.147
	I1006 01:02:11.562531       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1006 01:02:11.562586       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1006 01:02:11.585365       1 server_others.go:152] "Using iptables Proxier"
	I1006 01:02:11.585534       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1006 01:02:11.585776       1 server.go:846] "Version info" version="v1.28.2"
	I1006 01:02:11.585866       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 01:02:11.587531       1 config.go:188] "Starting service config controller"
	I1006 01:02:11.587673       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1006 01:02:11.587783       1 config.go:97] "Starting endpoint slice config controller"
	I1006 01:02:11.587936       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1006 01:02:11.592262       1 config.go:315] "Starting node config controller"
	I1006 01:02:11.592432       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1006 01:02:11.688500       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1006 01:02:11.688624       1 shared_informer.go:318] Caches are synced for service config
	I1006 01:02:11.693086       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [c1c8515aedefb6b3d23a49e03c91046aab5360b155b1295f18f778f0916bde26] <==
	* W1006 01:01:54.331988       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1006 01:01:54.332300       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1006 01:01:54.342921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1006 01:01:54.343121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1006 01:01:54.460630       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1006 01:01:54.460656       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1006 01:01:54.524235       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1006 01:01:54.524308       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1006 01:01:54.529622       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1006 01:01:54.529937       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1006 01:01:54.583642       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1006 01:01:54.583897       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1006 01:01:54.618377       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1006 01:01:54.618634       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1006 01:01:54.633199       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1006 01:01:54.633490       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1006 01:01:54.663169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1006 01:01:54.663782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1006 01:01:54.674217       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1006 01:01:54.674479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1006 01:01:54.677771       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1006 01:01:54.678087       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1006 01:01:54.694074       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1006 01:01:54.694126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1006 01:01:56.999025       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Fri 2023-10-06 01:01:09 UTC, ends at Fri 2023-10-06 01:05:37 UTC. --
	Oct 06 01:05:32 addons-565340 kubelet[1345]: I1006 01:05:32.521336    1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c13dfb0b-ad78-4cb0-942a-4bb4fa158e0c-kube-api-access-tk7zk" (OuterVolumeSpecName: "kube-api-access-tk7zk") pod "c13dfb0b-ad78-4cb0-942a-4bb4fa158e0c" (UID: "c13dfb0b-ad78-4cb0-942a-4bb4fa158e0c"). InnerVolumeSpecName "kube-api-access-tk7zk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 06 01:05:32 addons-565340 kubelet[1345]: I1006 01:05:32.616454    1345 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tk7zk\" (UniqueName: \"kubernetes.io/projected/c13dfb0b-ad78-4cb0-942a-4bb4fa158e0c-kube-api-access-tk7zk\") on node \"addons-565340\" DevicePath \"\""
	Oct 06 01:05:32 addons-565340 kubelet[1345]: I1006 01:05:32.825012    1345 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fdeba363-b0c0-4f06-bf16-be91c61a7698" path="/var/lib/kubelet/pods/fdeba363-b0c0-4f06-bf16-be91c61a7698/volumes"
	Oct 06 01:05:32 addons-565340 kubelet[1345]: I1006 01:05:32.859261    1345 scope.go:117] "RemoveContainer" containerID="adb690399fcaa11ba8d50a18697cf62d171f366b5bb931b383254af2d9dd671d"
	Oct 06 01:05:32 addons-565340 kubelet[1345]: I1006 01:05:32.879613    1345 scope.go:117] "RemoveContainer" containerID="adb690399fcaa11ba8d50a18697cf62d171f366b5bb931b383254af2d9dd671d"
	Oct 06 01:05:32 addons-565340 kubelet[1345]: E1006 01:05:32.880161    1345 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adb690399fcaa11ba8d50a18697cf62d171f366b5bb931b383254af2d9dd671d\": not found" containerID="adb690399fcaa11ba8d50a18697cf62d171f366b5bb931b383254af2d9dd671d"
	Oct 06 01:05:32 addons-565340 kubelet[1345]: I1006 01:05:32.880235    1345 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adb690399fcaa11ba8d50a18697cf62d171f366b5bb931b383254af2d9dd671d"} err="failed to get container status \"adb690399fcaa11ba8d50a18697cf62d171f366b5bb931b383254af2d9dd671d\": rpc error: code = NotFound desc = an error occurred when try to find container \"adb690399fcaa11ba8d50a18697cf62d171f366b5bb931b383254af2d9dd671d\": not found"
	Oct 06 01:05:32 addons-565340 kubelet[1345]: I1006 01:05:32.893547    1345 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-v2757" podStartSLOduration=3.216332634 podCreationTimestamp="2023-10-06 01:05:26 +0000 UTC" firstStartedPulling="2023-10-06 01:05:28.122151965 +0000 UTC m=+211.547211536" lastFinishedPulling="2023-10-06 01:05:31.799332235 +0000 UTC m=+215.224391807" observedRunningTime="2023-10-06 01:05:32.867989829 +0000 UTC m=+216.293049418" watchObservedRunningTime="2023-10-06 01:05:32.893512905 +0000 UTC m=+216.318572497"
	Oct 06 01:05:34 addons-565340 kubelet[1345]: I1006 01:05:34.825986    1345 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c13dfb0b-ad78-4cb0-942a-4bb4fa158e0c" path="/var/lib/kubelet/pods/c13dfb0b-ad78-4cb0-942a-4bb4fa158e0c/volumes"
	Oct 06 01:05:35 addons-565340 kubelet[1345]: I1006 01:05:35.876242    1345 scope.go:117] "RemoveContainer" containerID="5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db"
	Oct 06 01:05:35 addons-565340 kubelet[1345]: I1006 01:05:35.892629    1345 scope.go:117] "RemoveContainer" containerID="5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db"
	Oct 06 01:05:35 addons-565340 kubelet[1345]: E1006 01:05:35.893347    1345 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db\": not found" containerID="5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db"
	Oct 06 01:05:35 addons-565340 kubelet[1345]: I1006 01:05:35.893409    1345 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db"} err="failed to get container status \"5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f8826e60ab3613466c8ca867154fe39b3a348334de63c457b1865b7d01cd0db\": not found"
	Oct 06 01:05:35 addons-565340 kubelet[1345]: I1006 01:05:35.941650    1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbnb8\" (UniqueName: \"kubernetes.io/projected/5dd02a93-5133-48f0-99dd-bf2c3c2a7c80-kube-api-access-xbnb8\") pod \"5dd02a93-5133-48f0-99dd-bf2c3c2a7c80\" (UID: \"5dd02a93-5133-48f0-99dd-bf2c3c2a7c80\") "
	Oct 06 01:05:35 addons-565340 kubelet[1345]: I1006 01:05:35.942318    1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^6dc1622e-63e4-11ee-8391-1e47fd245466\") pod \"5dd02a93-5133-48f0-99dd-bf2c3c2a7c80\" (UID: \"5dd02a93-5133-48f0-99dd-bf2c3c2a7c80\") "
	Oct 06 01:05:35 addons-565340 kubelet[1345]: I1006 01:05:35.942946    1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5dd02a93-5133-48f0-99dd-bf2c3c2a7c80-gcp-creds\") pod \"5dd02a93-5133-48f0-99dd-bf2c3c2a7c80\" (UID: \"5dd02a93-5133-48f0-99dd-bf2c3c2a7c80\") "
	Oct 06 01:05:35 addons-565340 kubelet[1345]: I1006 01:05:35.943060    1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dd02a93-5133-48f0-99dd-bf2c3c2a7c80-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5dd02a93-5133-48f0-99dd-bf2c3c2a7c80" (UID: "5dd02a93-5133-48f0-99dd-bf2c3c2a7c80"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 06 01:05:35 addons-565340 kubelet[1345]: I1006 01:05:35.949754    1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dd02a93-5133-48f0-99dd-bf2c3c2a7c80-kube-api-access-xbnb8" (OuterVolumeSpecName: "kube-api-access-xbnb8") pod "5dd02a93-5133-48f0-99dd-bf2c3c2a7c80" (UID: "5dd02a93-5133-48f0-99dd-bf2c3c2a7c80"). InnerVolumeSpecName "kube-api-access-xbnb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 06 01:05:35 addons-565340 kubelet[1345]: I1006 01:05:35.980515    1345 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^6dc1622e-63e4-11ee-8391-1e47fd245466" (OuterVolumeSpecName: "task-pv-storage") pod "5dd02a93-5133-48f0-99dd-bf2c3c2a7c80" (UID: "5dd02a93-5133-48f0-99dd-bf2c3c2a7c80"). InnerVolumeSpecName "pvc-64b4d807-a61f-4833-8327-a0b2a4595a27". PluginName "kubernetes.io/csi", VolumeGidValue ""
	Oct 06 01:05:36 addons-565340 kubelet[1345]: I1006 01:05:36.043303    1345 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xbnb8\" (UniqueName: \"kubernetes.io/projected/5dd02a93-5133-48f0-99dd-bf2c3c2a7c80-kube-api-access-xbnb8\") on node \"addons-565340\" DevicePath \"\""
	Oct 06 01:05:36 addons-565340 kubelet[1345]: I1006 01:05:36.043386    1345 reconciler_common.go:293] "operationExecutor.UnmountDevice started for volume \"pvc-64b4d807-a61f-4833-8327-a0b2a4595a27\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^6dc1622e-63e4-11ee-8391-1e47fd245466\") on node \"addons-565340\" "
	Oct 06 01:05:36 addons-565340 kubelet[1345]: I1006 01:05:36.043402    1345 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5dd02a93-5133-48f0-99dd-bf2c3c2a7c80-gcp-creds\") on node \"addons-565340\" DevicePath \"\""
	Oct 06 01:05:36 addons-565340 kubelet[1345]: I1006 01:05:36.050052    1345 operation_generator.go:992] UnmountDevice succeeded for volume "pvc-64b4d807-a61f-4833-8327-a0b2a4595a27" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^6dc1622e-63e4-11ee-8391-1e47fd245466") on node "addons-565340"
	Oct 06 01:05:36 addons-565340 kubelet[1345]: I1006 01:05:36.144085    1345 reconciler_common.go:300] "Volume detached for volume \"pvc-64b4d807-a61f-4833-8327-a0b2a4595a27\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^6dc1622e-63e4-11ee-8391-1e47fd245466\") on node \"addons-565340\" DevicePath \"\""
	Oct 06 01:05:36 addons-565340 kubelet[1345]: I1006 01:05:36.825658    1345 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5dd02a93-5133-48f0-99dd-bf2c3c2a7c80" path="/var/lib/kubelet/pods/5dd02a93-5133-48f0-99dd-bf2c3c2a7c80/volumes"
	
	* 
	* ==> storage-provisioner [4381d8fa89f72470d2271da46fdb192edcb8e01e6b5738c5da0f517eb044fb67] <==
	* I1006 01:02:52.584424       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1006 01:02:52.595006       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1006 01:02:52.596004       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1006 01:02:52.604026       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1006 01:02:52.605158       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-565340_75ffe95c-371d-4eea-b75c-3d3422377344!
	I1006 01:02:52.604649       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d550bbd5-b8cc-4239-832a-237ebb2b53e6", APIVersion:"v1", ResourceVersion:"878", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-565340_75ffe95c-371d-4eea-b75c-3d3422377344 became leader
	I1006 01:02:52.705686       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-565340_75ffe95c-371d-4eea-b75c-3d3422377344!
	E1006 01:05:01.296905       1 controller.go:1050] claim "0eee7991-a6eb-4053-90a9-4fed6ee19f1f" in work queue no longer exists
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-565340 -n addons-565340
helpers_test.go:261: (dbg) Run:  kubectl --context addons-565340 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: headlamp-58b88cff49-hgwch csi-hostpath-attacher-0
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-565340 describe pod headlamp-58b88cff49-hgwch csi-hostpath-attacher-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-565340 describe pod headlamp-58b88cff49-hgwch csi-hostpath-attacher-0: exit status 1 (63.301257ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "headlamp-58b88cff49-hgwch" not found
	Error from server (NotFound): pods "csi-hostpath-attacher-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-565340 describe pod headlamp-58b88cff49-hgwch csi-hostpath-attacher-0: exit status 1
--- FAIL: TestAddons/parallel/Ingress (33.93s)

TestErrorSpam/setup (62.97s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-165548 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-165548 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-165548 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-165548 --driver=kvm2  --container-runtime=containerd: (1m2.971195323s)
error_spam_test.go:96: unexpected stderr: "X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17314-66550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0: no such file or directory"
error_spam_test.go:110: minikube stdout:
* [nospam-165548] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=17314
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/17314-66550/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-66550/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting control plane node nospam-165548 in cluster nospam-165548
* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.2 on containerd 1.7.6 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-165548" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17314-66550/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0: no such file or directory
--- FAIL: TestErrorSpam/setup (62.97s)

                                                
                                    

Test pass (268/306)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 35.75
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.2/json-events 23.53
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.14
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
19 TestBinaryMirror 0.57
20 TestOffline 147.02
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 230.86
27 TestAddons/parallel/Registry 16.84
29 TestAddons/parallel/InspektorGadget 11.24
30 TestAddons/parallel/MetricsServer 6.06
31 TestAddons/parallel/HelmTiller 13.25
33 TestAddons/parallel/CSI 56.6
34 TestAddons/parallel/Headlamp 33.21
35 TestAddons/parallel/CloudSpanner 5.61
36 TestAddons/parallel/LocalPath 57.15
37 TestAddons/parallel/NvidiaDevicePlugin 5.75
40 TestAddons/serial/GCPAuth/Namespaces 0.13
41 TestAddons/StoppedEnableDisable 92.73
42 TestCertOptions 88.1
43 TestCertExpiration 311.95
45 TestForceSystemdFlag 137.59
46 TestForceSystemdEnv 69.76
48 TestKVMDriverInstallOrUpdate 10.37
53 TestErrorSpam/start 0.38
54 TestErrorSpam/status 0.77
55 TestErrorSpam/pause 1.53
56 TestErrorSpam/unpause 1.65
57 TestErrorSpam/stop 1.53
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 86.17
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 5.8
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.07
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.89
69 TestFunctional/serial/CacheCmd/cache/add_local 3.76
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.07
74 TestFunctional/serial/CacheCmd/cache/delete 0.12
75 TestFunctional/serial/MinikubeKubectlCmd 0.12
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
77 TestFunctional/serial/ExtraConfig 39.57
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.51
80 TestFunctional/serial/LogsFileCmd 1.5
81 TestFunctional/serial/InvalidService 5.27
83 TestFunctional/parallel/ConfigCmd 0.44
84 TestFunctional/parallel/DashboardCmd 23.39
85 TestFunctional/parallel/DryRun 0.33
86 TestFunctional/parallel/InternationalLanguage 0.16
87 TestFunctional/parallel/StatusCmd 1.1
91 TestFunctional/parallel/ServiceCmdConnect 8.6
92 TestFunctional/parallel/AddonsCmd 0.16
93 TestFunctional/parallel/PersistentVolumeClaim 53.58
95 TestFunctional/parallel/SSHCmd 0.45
96 TestFunctional/parallel/CpCmd 0.95
97 TestFunctional/parallel/MySQL 32.85
98 TestFunctional/parallel/FileSync 0.25
99 TestFunctional/parallel/CertSync 1.5
103 TestFunctional/parallel/NodeLabels 0.06
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
107 TestFunctional/parallel/License 0.77
108 TestFunctional/parallel/Version/short 0.06
109 TestFunctional/parallel/Version/components 0.58
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.41
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
114 TestFunctional/parallel/ImageCommands/ImageBuild 5.47
115 TestFunctional/parallel/ImageCommands/Setup 2.62
116 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.99
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.85
128 TestFunctional/parallel/ServiceCmd/List 0.34
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.26
132 TestFunctional/parallel/ServiceCmd/Format 0.32
133 TestFunctional/parallel/ServiceCmd/URL 0.3
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.1
135 TestFunctional/parallel/ImageCommands/ImageRemove 1.12
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.25
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
138 TestFunctional/parallel/ProfileCmd/profile_list 0.32
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
140 TestFunctional/parallel/MountCmd/any-port 13.75
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.45
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
145 TestFunctional/parallel/MountCmd/specific-port 1.95
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.56
147 TestFunctional/delete_addon-resizer_images 0.07
148 TestFunctional/delete_my-image_image 0.01
149 TestFunctional/delete_minikube_cached_images 0.01
153 TestIngressAddonLegacy/StartLegacyK8sCluster 88.16
155 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.97
156 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.58
157 TestIngressAddonLegacy/serial/ValidateIngressAddons 39.67
160 TestJSONOutput/start/Command 114.72
161 TestJSONOutput/start/Audit 0
163 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/pause/Command 0.67
167 TestJSONOutput/pause/Audit 0
169 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/unpause/Command 0.61
173 TestJSONOutput/unpause/Audit 0
175 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/stop/Command 2.1
179 TestJSONOutput/stop/Audit 0
181 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
183 TestErrorJSONOutput 0.22
188 TestMainNoArgs 0.06
189 TestMinikubeProfile 129.68
192 TestMountStart/serial/StartWithMountFirst 27.9
193 TestMountStart/serial/VerifyMountFirst 0.4
194 TestMountStart/serial/StartWithMountSecond 27.99
195 TestMountStart/serial/VerifyMountSecond 0.4
196 TestMountStart/serial/DeleteFirst 0.69
197 TestMountStart/serial/VerifyMountPostDelete 0.4
198 TestMountStart/serial/Stop 1.22
199 TestMountStart/serial/RestartStopped 24.04
200 TestMountStart/serial/VerifyMountPostStop 0.42
203 TestMultiNode/serial/FreshStart2Nodes 133.69
204 TestMultiNode/serial/DeployApp2Nodes 6.43
205 TestMultiNode/serial/PingHostFrom2Pods 0.94
206 TestMultiNode/serial/AddNode 44.08
207 TestMultiNode/serial/ProfileList 0.23
208 TestMultiNode/serial/CopyFile 7.81
209 TestMultiNode/serial/StopNode 2.22
210 TestMultiNode/serial/StartAfterStop 27.9
211 TestMultiNode/serial/RestartKeepsNodes 308.94
212 TestMultiNode/serial/DeleteNode 1.78
213 TestMultiNode/serial/StopMultiNode 183.95
214 TestMultiNode/serial/RestartMultiNode 91.73
215 TestMultiNode/serial/ValidateNameConflict 64.95
220 TestPreload 265.63
222 TestScheduledStopUnix 134.33
226 TestRunningBinaryUpgrade 174.03
228 TestKubernetesUpgrade 194.18
232 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
235 TestNoKubernetes/serial/StartWithK8s 120.76
240 TestNetworkPlugins/group/false 3.37
244 TestNoKubernetes/serial/StartWithStopK8s 20.67
245 TestNoKubernetes/serial/Start 29.5
253 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
254 TestNoKubernetes/serial/ProfileList 0.74
255 TestNoKubernetes/serial/Stop 1.25
256 TestNoKubernetes/serial/StartNoArgs 46.12
257 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
259 TestPause/serial/Start 157.56
260 TestStoppedBinaryUpgrade/Setup 3.16
261 TestStoppedBinaryUpgrade/Upgrade 123.98
262 TestNetworkPlugins/group/auto/Start 123.64
263 TestPause/serial/SecondStartNoReconfiguration 21.64
264 TestNetworkPlugins/group/kindnet/Start 105.44
265 TestPause/serial/Pause 0.77
266 TestPause/serial/VerifyStatus 0.29
267 TestPause/serial/Unpause 0.93
268 TestPause/serial/PauseAgain 0.91
269 TestPause/serial/DeletePaused 1.13
270 TestPause/serial/VerifyDeletedResources 3.34
271 TestNetworkPlugins/group/calico/Start 157.48
272 TestStoppedBinaryUpgrade/MinikubeLogs 1.19
273 TestNetworkPlugins/group/custom-flannel/Start 123.74
274 TestNetworkPlugins/group/auto/KubeletFlags 0.27
275 TestNetworkPlugins/group/auto/NetCatPod 11.44
276 TestNetworkPlugins/group/auto/DNS 0.26
277 TestNetworkPlugins/group/auto/Localhost 0.21
278 TestNetworkPlugins/group/auto/HairPin 0.2
279 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
280 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
281 TestNetworkPlugins/group/kindnet/NetCatPod 11.52
282 TestNetworkPlugins/group/enable-default-cni/Start 91.08
283 TestNetworkPlugins/group/kindnet/DNS 0.23
284 TestNetworkPlugins/group/kindnet/Localhost 0.25
285 TestNetworkPlugins/group/kindnet/HairPin 0.22
286 TestNetworkPlugins/group/flannel/Start 122.48
287 TestNetworkPlugins/group/calico/ControllerPod 5.04
288 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
289 TestNetworkPlugins/group/calico/KubeletFlags 0.24
290 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.43
291 TestNetworkPlugins/group/calico/NetCatPod 10.45
292 TestNetworkPlugins/group/custom-flannel/DNS 0.25
293 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
294 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
295 TestNetworkPlugins/group/calico/DNS 0.22
296 TestNetworkPlugins/group/calico/Localhost 0.34
297 TestNetworkPlugins/group/calico/HairPin 0.22
298 TestNetworkPlugins/group/bridge/Start 126.36
300 TestStartStop/group/old-k8s-version/serial/FirstStart 159
301 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
302 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.49
303 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
304 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
305 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
307 TestStartStop/group/no-preload/serial/FirstStart 91.61
308 TestNetworkPlugins/group/flannel/ControllerPod 5.03
309 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
310 TestNetworkPlugins/group/flannel/NetCatPod 10.41
311 TestNetworkPlugins/group/flannel/DNS 0.2
312 TestNetworkPlugins/group/flannel/Localhost 0.18
313 TestNetworkPlugins/group/flannel/HairPin 0.19
315 TestStartStop/group/embed-certs/serial/FirstStart 88.46
316 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
317 TestNetworkPlugins/group/bridge/NetCatPod 11.32
318 TestStartStop/group/no-preload/serial/DeployApp 12.5
319 TestNetworkPlugins/group/bridge/DNS 0.19
320 TestNetworkPlugins/group/bridge/Localhost 0.25
321 TestNetworkPlugins/group/bridge/HairPin 0.16
322 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.46
323 TestStartStop/group/no-preload/serial/Stop 92.11
325 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.51
326 TestStartStop/group/old-k8s-version/serial/DeployApp 11.49
327 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
328 TestStartStop/group/old-k8s-version/serial/Stop 92.28
329 TestStartStop/group/embed-certs/serial/DeployApp 12.75
330 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.3
331 TestStartStop/group/embed-certs/serial/Stop 91.77
332 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.47
333 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
334 TestStartStop/group/no-preload/serial/SecondStart 328.83
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.05
337 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
338 TestStartStop/group/old-k8s-version/serial/SecondStart 451.26
339 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.29
340 TestStartStop/group/embed-certs/serial/SecondStart 331.29
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
342 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 332.41
343 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
344 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
345 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
346 TestStartStop/group/no-preload/serial/Pause 2.78
348 TestStartStop/group/newest-cni/serial/FirstStart 86.21
349 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.03
350 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
351 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.38
352 TestStartStop/group/embed-certs/serial/Pause 3.48
353 TestStartStop/group/newest-cni/serial/DeployApp 0
354 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.64
355 TestStartStop/group/newest-cni/serial/Stop 7.15
356 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 19.03
357 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
358 TestStartStop/group/newest-cni/serial/SecondStart 49.7
359 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
360 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
361 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.59
362 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
364 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
365 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
366 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
367 TestStartStop/group/newest-cni/serial/Pause 2.71
368 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
369 TestStartStop/group/old-k8s-version/serial/Pause 3.37
TestDownloadOnly/v1.16.0/json-events (35.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-034185 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-034185 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (35.747750737s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (35.75s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-034185
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-034185: exit status 85 (73.665958ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-034185 | jenkins | v1.31.2 | 06 Oct 23 00:59 UTC |          |
	|         | -p download-only-034185        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/06 00:59:56
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 00:59:56.438593   73852 out.go:296] Setting OutFile to fd 1 ...
	I1006 00:59:56.438851   73852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 00:59:56.438860   73852 out.go:309] Setting ErrFile to fd 2...
	I1006 00:59:56.438865   73852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 00:59:56.439065   73852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-66550/.minikube/bin
	W1006 00:59:56.439184   73852 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17314-66550/.minikube/config/config.json: open /home/jenkins/minikube-integration/17314-66550/.minikube/config/config.json: no such file or directory
	I1006 00:59:56.439753   73852 out.go:303] Setting JSON to true
	I1006 00:59:56.440709   73852 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6140,"bootTime":1696547857,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 00:59:56.440772   73852 start.go:138] virtualization: kvm guest
	I1006 00:59:56.443339   73852 out.go:97] [download-only-034185] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1006 00:59:56.444868   73852 out.go:169] MINIKUBE_LOCATION=17314
	W1006 00:59:56.443488   73852 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17314-66550/.minikube/cache/preloaded-tarball: no such file or directory
	I1006 00:59:56.443521   73852 notify.go:220] Checking for updates...
	I1006 00:59:56.447777   73852 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 00:59:56.449230   73852 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17314-66550/kubeconfig
	I1006 00:59:56.450517   73852 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-66550/.minikube
	I1006 00:59:56.451724   73852 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1006 00:59:56.454081   73852 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1006 00:59:56.454353   73852 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 00:59:56.488824   73852 out.go:97] Using the kvm2 driver based on user configuration
	I1006 00:59:56.488882   73852 start.go:298] selected driver: kvm2
	I1006 00:59:56.488894   73852 start.go:902] validating driver "kvm2" against <nil>
	I1006 00:59:56.489225   73852 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 00:59:56.489310   73852 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17314-66550/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 00:59:56.503636   73852 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1006 00:59:56.503680   73852 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1006 00:59:56.504209   73852 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1006 00:59:56.504369   73852 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 00:59:56.504439   73852 cni.go:84] Creating CNI manager for ""
	I1006 00:59:56.504452   73852 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1006 00:59:56.504462   73852 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1006 00:59:56.504470   73852 start_flags.go:323] config:
	{Name:download-only-034185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-034185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 00:59:56.504686   73852 iso.go:125] acquiring lock: {Name:mk59b3e5fbcca8f5b6f4ff791dcd43d3ee60c748 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 00:59:56.506595   73852 out.go:97] Downloading VM boot image ...
	I1006 00:59:56.506621   73852 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17314-66550/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1006 01:00:07.693300   73852 out.go:97] Starting control plane node download-only-034185 in cluster download-only-034185
	I1006 01:00:07.693324   73852 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1006 01:00:07.850249   73852 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1006 01:00:07.850281   73852 cache.go:57] Caching tarball of preloaded images
	I1006 01:00:07.850493   73852 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1006 01:00:07.852487   73852 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1006 01:00:07.852510   73852 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1006 01:00:08.018943   73852 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/17314-66550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1006 01:00:24.390840   73852 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1006 01:00:24.390940   73852 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17314-66550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1006 01:00:25.269986   73852 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I1006 01:00:25.270399   73852 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/download-only-034185/config.json ...
	I1006 01:00:25.270433   73852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/download-only-034185/config.json: {Name:mk15606602fa95ef304ea5748a626c507d98bf8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 01:00:25.270592   73852 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1006 01:00:25.270783   73852 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17314-66550/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-034185"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.2/json-events (23.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-034185 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-034185 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (23.531315836s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (23.53s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-034185
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-034185: exit status 85 (75.31136ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-034185 | jenkins | v1.31.2 | 06 Oct 23 00:59 UTC |          |
	|         | -p download-only-034185        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-034185 | jenkins | v1.31.2 | 06 Oct 23 01:00 UTC |          |
	|         | -p download-only-034185        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/06 01:00:32
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 01:00:32.260364   73969 out.go:296] Setting OutFile to fd 1 ...
	I1006 01:00:32.260608   73969 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:00:32.260616   73969 out.go:309] Setting ErrFile to fd 2...
	I1006 01:00:32.260620   73969 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:00:32.260785   73969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-66550/.minikube/bin
	W1006 01:00:32.260894   73969 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17314-66550/.minikube/config/config.json: open /home/jenkins/minikube-integration/17314-66550/.minikube/config/config.json: no such file or directory
	I1006 01:00:32.261284   73969 out.go:303] Setting JSON to true
	I1006 01:00:32.262088   73969 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6176,"bootTime":1696547857,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 01:00:32.262148   73969 start.go:138] virtualization: kvm guest
	I1006 01:00:32.264428   73969 out.go:97] [download-only-034185] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1006 01:00:32.265939   73969 out.go:169] MINIKUBE_LOCATION=17314
	I1006 01:00:32.264619   73969 notify.go:220] Checking for updates...
	I1006 01:00:32.268565   73969 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 01:00:32.269916   73969 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17314-66550/kubeconfig
	I1006 01:00:32.271141   73969 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-66550/.minikube
	I1006 01:00:32.272511   73969 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1006 01:00:32.274890   73969 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1006 01:00:32.275330   73969 config.go:182] Loaded profile config "download-only-034185": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1006 01:00:32.275375   73969 start.go:810] api.Load failed for download-only-034185: filestore "download-only-034185": Docker machine "download-only-034185" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1006 01:00:32.275443   73969 driver.go:378] Setting default libvirt URI to qemu:///system
	W1006 01:00:32.275486   73969 start.go:810] api.Load failed for download-only-034185: filestore "download-only-034185": Docker machine "download-only-034185" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1006 01:00:32.306564   73969 out.go:97] Using the kvm2 driver based on existing profile
	I1006 01:00:32.306585   73969 start.go:298] selected driver: kvm2
	I1006 01:00:32.306590   73969 start.go:902] validating driver "kvm2" against &{Name:download-only-034185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-034185 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 01:00:32.306942   73969 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 01:00:32.306996   73969 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17314-66550/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1006 01:00:32.320733   73969 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1006 01:00:32.321414   73969 cni.go:84] Creating CNI manager for ""
	I1006 01:00:32.321431   73969 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1006 01:00:32.321442   73969 start_flags.go:323] config:
	{Name:download-only-034185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-034185 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 01:00:32.321607   73969 iso.go:125] acquiring lock: {Name:mk59b3e5fbcca8f5b6f4ff791dcd43d3ee60c748 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 01:00:32.323080   73969 out.go:97] Starting control plane node download-only-034185 in cluster download-only-034185
	I1006 01:00:32.323095   73969 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1006 01:00:32.972374   73969 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4
	I1006 01:00:32.972404   73969 cache.go:57] Caching tarball of preloaded images
	I1006 01:00:32.972555   73969 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1006 01:00:32.974267   73969 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1006 01:00:32.974286   73969 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4 ...
	I1006 01:00:33.131513   73969 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:ae58936c147f05f34778878c23d3887a -> /home/jenkins/minikube-integration/17314-66550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4
	I1006 01:00:48.317961   73969 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4 ...
	I1006 01:00:48.318060   73969 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17314-66550/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4 ...
	I1006 01:00:49.222480   73969 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on containerd
	I1006 01:00:49.222634   73969 profile.go:148] Saving config to /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/download-only-034185/config.json ...
	I1006 01:00:49.222853   73969 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime containerd
	I1006 01:00:49.223088   73969 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17314-66550/.minikube/cache/linux/amd64/v1.28.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-034185"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.08s)

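The v1.28.2 preload above is downloaded with an md5 checksum (`md5:ae58936c147f05f34778878c23d3887a`, per the `preload.go:238`/`preload.go:256` lines in the log) that minikube verifies before caching the tarball. The check can be sketched with a small helper; `verify_md5` is our name for illustration, not a function in the test suite, and the cache path in the usage comment is the one logged above.

```shell
# Compare a file's md5 digest against an expected hex value,
# mirroring the preload checksum verification step in the log.
verify_md5() {
  actual=$(md5sum "$1" | cut -d' ' -f1)
  [ "$actual" = "$2" ]
}

# e.g. for the preload tarball fetched above:
# verify_md5 ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-containerd-overlay2-amd64.tar.lz4 \
#   ae58936c147f05f34778878c23d3887a
```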
TestDownloadOnly/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-034185
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-300718 --alsologtostderr --binary-mirror http://127.0.0.1:33405 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-300718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-300718
--- PASS: TestBinaryMirror (0.57s)

TestOffline (147.02s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-691771 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-691771 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m25.7518006s)
helpers_test.go:175: Cleaning up "offline-containerd-691771" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-691771
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-691771: (1.263177153s)
--- PASS: TestOffline (147.02s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-565340
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-565340: exit status 85 (63.828702ms)

-- stdout --
	* Profile "addons-565340" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-565340"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-565340
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-565340: exit status 85 (65.079033ms)

-- stdout --
	* Profile "addons-565340" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-565340"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

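Both PreSetup checks above assert that toggling an addon against a profile that does not exist fails cleanly with exit status 85, rather than succeeding or crashing. That assertion pattern can be reproduced with a small helper; `expect_status` is a hypothetical name, not part of the test harness.

```shell
# Run a command and succeed only if its exit status matches the
# expected value (addon toggles on a missing profile exit 85).
expect_status() {
  want="$1"; shift
  "$@" && got=0 || got=$?
  [ "$got" -eq "$want" ]
}

# e.g.:
# expect_status 85 out/minikube-linux-amd64 addons enable dashboard -p addons-565340
```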
TestAddons/Setup (230.86s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-565340 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-565340 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m50.861540141s)
--- PASS: TestAddons/Setup (230.86s)

TestAddons/parallel/Registry (16.84s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 28.452003ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-jc9w8" [70aa0cb1-3f28-45f9-9091-bd1c0832c6e6] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.02219112s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-446ct" [2914b09b-5bf8-49a0-8281-9f2cf66e5694] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.033310331s
addons_test.go:339: (dbg) Run:  kubectl --context addons-565340 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-565340 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-565340 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.777374207s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-565340 ip
2023/10/06 01:05:03 [DEBUG] GET http://192.168.39.147:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-565340 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.84s)

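The registry check above runs a busybox pod that probes the in-cluster service with `wget --spider`, then fetches the NodePort URL (`http://192.168.39.147:5000` in this run) from the host. The same reachability probe can be sketched outside a pod; `probe_registry` is our wrapper name for illustration only.

```shell
# HEAD-probe a registry endpoint and report reachability,
# like the wget --spider check in TestAddons/parallel/Registry.
probe_registry() {
  url="$1"
  if wget -q --spider --tries=1 -T 5 "$url" 2>/dev/null; then
    echo "reachable: $url"
  else
    echo "unreachable: $url"
  fi
}

# e.g. probe_registry http://192.168.39.147:5000
```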
TestAddons/parallel/InspektorGadget (11.24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7xlgt" [16f3b61f-0c0b-4185-983f-a6f2d7010b9a] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.015251377s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-565340
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-565340: (6.221966029s)
--- PASS: TestAddons/parallel/InspektorGadget (11.24s)

TestAddons/parallel/MetricsServer (6.06s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 28.604529ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-sfzph" [450e3808-b6d6-43ee-9359-1ab47d897c47] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.040065198s
addons_test.go:414: (dbg) Run:  kubectl --context addons-565340 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-565340 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.06s)

TestAddons/parallel/HelmTiller (13.25s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.534507ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-m5swh" [ef8be544-3c52-471f-8c04-c11fb7a940fe] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.013760302s
addons_test.go:472: (dbg) Run:  kubectl --context addons-565340 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-565340 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.588086016s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-565340 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.25s)

TestAddons/parallel/CSI (56.6s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 28.940131ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-565340 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-565340 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [64169bda-4c38-4a37-a777-4f96bdd27ef3] Pending
helpers_test.go:344: "task-pv-pod" [64169bda-4c38-4a37-a777-4f96bdd27ef3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [64169bda-4c38-4a37-a777-4f96bdd27ef3] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.014019024s
addons_test.go:583: (dbg) Run:  kubectl --context addons-565340 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-565340 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-565340 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-565340 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-565340 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-565340 delete pod task-pv-pod: (1.230901666s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-565340 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-565340 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-565340 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5dd02a93-5133-48f0-99dd-bf2c3c2a7c80] Pending
helpers_test.go:344: "task-pv-pod-restore" [5dd02a93-5133-48f0-99dd-bf2c3c2a7c80] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5dd02a93-5133-48f0-99dd-bf2c3c2a7c80] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.03107481s
addons_test.go:625: (dbg) Run:  kubectl --context addons-565340 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-565340 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-565340 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-565340 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-565340 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.895274149s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-565340 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (56.60s)

TestAddons/parallel/Headlamp (33.21s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-565340 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-565340 --alsologtostderr -v=1: (1.167689413s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-hgwch" [eb70bf7b-89f3-4c4a-b5b9-095934f18ac7] Pending
helpers_test.go:344: "headlamp-58b88cff49-hgwch" [eb70bf7b-89f3-4c4a-b5b9-095934f18ac7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-hgwch" [eb70bf7b-89f3-4c4a-b5b9-095934f18ac7] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 32.037329366s
--- PASS: TestAddons/parallel/Headlamp (33.21s)

TestAddons/parallel/CloudSpanner (5.61s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-b42zl" [e2d342e0-24d2-4707-b56d-4dac2c18211b] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012300858s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-565340
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

TestAddons/parallel/LocalPath (57.15s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-565340 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-565340 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565340 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [11c636d4-b819-476c-8074-1de7d038aa2f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [11c636d4-b819-476c-8074-1de7d038aa2f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [11c636d4-b819-476c-8074-1de7d038aa2f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.012965086s
addons_test.go:890: (dbg) Run:  kubectl --context addons-565340 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-565340 ssh "cat /opt/local-path-provisioner/pvc-0eee7991-a6eb-4053-90a9-4fed6ee19f1f_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-565340 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-565340 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-565340 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-565340 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.499371951s)
--- PASS: TestAddons/parallel/LocalPath (57.15s)

TestAddons/parallel/NvidiaDevicePlugin (5.75s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dzf4h" [d76b8151-cfa3-4792-aaae-7bb5841e6485] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.016502677s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-565340
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.75s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-565340 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-565340 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (92.73s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-565340
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-565340: (1m32.412303683s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-565340
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-565340
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-565340
--- PASS: TestAddons/StoppedEnableDisable (92.73s)

TestCertOptions (88.1s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-468579 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
E1006 01:44:15.521679   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-468579 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m26.48341499s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-468579 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-468579 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-468579 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-468579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-468579
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-468579: (1.118070593s)
--- PASS: TestCertOptions (88.10s)

TestCertExpiration (311.95s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-815106 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-815106 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m37.678474554s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-815106 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-815106 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (32.988338734s)
helpers_test.go:175: Cleaning up "cert-expiration-815106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-815106
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-815106: (1.283194614s)
--- PASS: TestCertExpiration (311.95s)

TestForceSystemdFlag (137.59s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-270995 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E1006 01:42:50.777474   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-270995 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (2m15.579293296s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-270995 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-270995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-270995
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-270995: (1.756899312s)
--- PASS: TestForceSystemdFlag (137.59s)

TestForceSystemdEnv (69.76s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-665218 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-665218 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m8.39545474s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-665218 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-665218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-665218
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-665218: (1.116121023s)
--- PASS: TestForceSystemdEnv (69.76s)

TestKVMDriverInstallOrUpdate (10.37s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (10.37s)

TestErrorSpam/start (0.38s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.77s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.53s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 pause
--- PASS: TestErrorSpam/pause (1.53s)

TestErrorSpam/unpause (1.65s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

TestErrorSpam/stop (1.53s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 stop: (1.367368056s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-165548 --log_dir /tmp/nospam-165548 stop
--- PASS: TestErrorSpam/stop (1.53s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17314-66550/.minikube/files/etc/test/nested/copy/73840/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (86.17s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-362209 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E1006 01:09:47.727365   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:09:47.733167   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:09:47.743462   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:09:47.763744   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:09:47.804040   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:09:47.884375   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:09:48.044821   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:09:48.365444   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:09:49.006457   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:09:50.286949   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:09:52.847396   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:09:57.967629   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:10:08.207934   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:10:28.688330   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-362209 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m26.167397954s)
--- PASS: TestFunctional/serial/StartWithProxy (86.17s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.8s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-362209 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-362209 --alsologtostderr -v=8: (5.803847234s)
functional_test.go:659: soft start took 5.804542337s for "functional-362209" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.80s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-362209 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.89s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-362209 cache add registry.k8s.io/pause:3.1: (1.251399656s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-362209 cache add registry.k8s.io/pause:3.3: (1.341652224s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-362209 cache add registry.k8s.io/pause:latest: (1.294796288s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.89s)

TestFunctional/serial/CacheCmd/cache/add_local (3.76s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-362209 /tmp/TestFunctionalserialCacheCmdcacheadd_local3852779739/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 cache add minikube-local-cache-test:functional-362209
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-362209 cache add minikube-local-cache-test:functional-362209: (3.45143535s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 cache delete minikube-local-cache-test:functional-362209
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-362209
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.76s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-362209 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (230.086257ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-362209 cache reload: (1.348884359s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 kubectl -- --context functional-362209 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-362209 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (39.57s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-362209 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1006 01:11:09.650314   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-362209 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.574312444s)
functional_test.go:757: restart took 39.574441908s for "functional-362209" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.57s)

TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-362209 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-362209 logs: (1.513648651s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

TestFunctional/serial/LogsFileCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 logs --file /tmp/TestFunctionalserialLogsFileCmd3736941204/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-362209 logs --file /tmp/TestFunctionalserialLogsFileCmd3736941204/001/logs.txt: (1.49969759s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

TestFunctional/serial/InvalidService (5.27s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-362209 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-362209
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-362209: exit status 115 (295.154642ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.84:32598 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-362209 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-362209 delete -f testdata/invalidsvc.yaml: (1.696203569s)
--- PASS: TestFunctional/serial/InvalidService (5.27s)

TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-362209 config get cpus: exit status 14 (69.966673ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-362209 config get cpus: exit status 14 (61.962148ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (23.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-362209 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-362209 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 81109: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (23.39s)

TestFunctional/parallel/DryRun (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-362209 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-362209 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (179.744981ms)

-- stdout --
	* [functional-362209] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-66550/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-66550/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1006 01:12:01.356084   80970 out.go:296] Setting OutFile to fd 1 ...
	I1006 01:12:01.356221   80970 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:12:01.356232   80970 out.go:309] Setting ErrFile to fd 2...
	I1006 01:12:01.356239   80970 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:12:01.356532   80970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-66550/.minikube/bin
	I1006 01:12:01.357291   80970 out.go:303] Setting JSON to false
	I1006 01:12:01.358572   80970 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6865,"bootTime":1696547857,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 01:12:01.358663   80970 start.go:138] virtualization: kvm guest
	I1006 01:12:01.362476   80970 out.go:177] * [functional-362209] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1006 01:12:01.363883   80970 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 01:12:01.363907   80970 notify.go:220] Checking for updates...
	I1006 01:12:01.365370   80970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 01:12:01.366874   80970 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-66550/kubeconfig
	I1006 01:12:01.368296   80970 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-66550/.minikube
	I1006 01:12:01.369881   80970 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 01:12:01.371487   80970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 01:12:01.373660   80970 config.go:182] Loaded profile config "functional-362209": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1006 01:12:01.374259   80970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:12:01.374375   80970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:12:01.389136   80970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41687
	I1006 01:12:01.389620   80970 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:12:01.390199   80970 main.go:141] libmachine: Using API Version  1
	I1006 01:12:01.390220   80970 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:12:01.390568   80970 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:12:01.390764   80970 main.go:141] libmachine: (functional-362209) Calling .DriverName
	I1006 01:12:01.391005   80970 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 01:12:01.391311   80970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:12:01.391358   80970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:12:01.407318   80970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45807
	I1006 01:12:01.407734   80970 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:12:01.408256   80970 main.go:141] libmachine: Using API Version  1
	I1006 01:12:01.408282   80970 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:12:01.408646   80970 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:12:01.408829   80970 main.go:141] libmachine: (functional-362209) Calling .DriverName
	I1006 01:12:01.442508   80970 out.go:177] * Using the kvm2 driver based on existing profile
	I1006 01:12:01.444152   80970 start.go:298] selected driver: kvm2
	I1006 01:12:01.444168   80970 start.go:902] validating driver "kvm2" against &{Name:functional-362209 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:functional-362209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.84 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 01:12:01.444297   80970 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 01:12:01.446387   80970 out.go:177] 
	W1006 01:12:01.447735   80970 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1006 01:12:01.449299   80970 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-362209 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.33s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-362209 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-362209 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (157.975581ms)

-- stdout --
	* [functional-362209] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-66550/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-66550/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1006 01:12:01.662848   81024 out.go:296] Setting OutFile to fd 1 ...
	I1006 01:12:01.662965   81024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:12:01.662975   81024 out.go:309] Setting ErrFile to fd 2...
	I1006 01:12:01.662982   81024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:12:01.663298   81024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-66550/.minikube/bin
	I1006 01:12:01.663891   81024 out.go:303] Setting JSON to false
	I1006 01:12:01.664855   81024 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6865,"bootTime":1696547857,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 01:12:01.664915   81024 start.go:138] virtualization: kvm guest
	I1006 01:12:01.666704   81024 out.go:177] * [functional-362209] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1006 01:12:01.668459   81024 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 01:12:01.670194   81024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 01:12:01.668516   81024 notify.go:220] Checking for updates...
	I1006 01:12:01.673672   81024 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-66550/kubeconfig
	I1006 01:12:01.675280   81024 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-66550/.minikube
	I1006 01:12:01.676678   81024 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 01:12:01.678221   81024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 01:12:01.680266   81024 config.go:182] Loaded profile config "functional-362209": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1006 01:12:01.680641   81024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:12:01.680696   81024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:12:01.695486   81024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40827
	I1006 01:12:01.695926   81024 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:12:01.696642   81024 main.go:141] libmachine: Using API Version  1
	I1006 01:12:01.696676   81024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:12:01.697027   81024 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:12:01.697230   81024 main.go:141] libmachine: (functional-362209) Calling .DriverName
	I1006 01:12:01.697506   81024 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 01:12:01.697800   81024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:12:01.697855   81024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:12:01.714023   81024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38155
	I1006 01:12:01.714440   81024 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:12:01.714909   81024 main.go:141] libmachine: Using API Version  1
	I1006 01:12:01.714928   81024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:12:01.715240   81024 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:12:01.715456   81024 main.go:141] libmachine: (functional-362209) Calling .DriverName
	I1006 01:12:01.752668   81024 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1006 01:12:01.753960   81024 start.go:298] selected driver: kvm2
	I1006 01:12:01.753975   81024 start.go:902] validating driver "kvm2" against &{Name:functional-362209 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:functional-362209 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.84 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1006 01:12:01.754106   81024 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 01:12:01.756538   81024 out.go:177] 
	W1006 01:12:01.757759   81024 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1006 01:12:01.758984   81024 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)

TestFunctional/parallel/ServiceCmdConnect (8.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-362209 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-362209 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-7lm4l" [a4fe17cd-e4aa-45ea-9471-7c29602405ed] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-7lm4l" [a4fe17cd-e4aa-45ea-9471-7c29602405ed] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.020186335s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.84:31104
functional_test.go:1674: http://192.168.50.84:31104: success! body:

Hostname: hello-node-connect-55497b8b78-7lm4l

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.84:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.84:31104
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.60s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (53.58s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [76443d91-64fc-4779-a8e4-790090af5c51] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012235955s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-362209 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-362209 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-362209 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-362209 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-362209 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c0176da4-de34-43a2-9a6d-aacb962ce0d7] Pending
helpers_test.go:344: "sp-pod" [c0176da4-de34-43a2-9a6d-aacb962ce0d7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c0176da4-de34-43a2-9a6d-aacb962ce0d7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 28.020863453s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-362209 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-362209 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-362209 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3f4d8445-1e50-4b50-89d4-28ae99aa6aef] Pending
helpers_test.go:344: "sp-pod" [3f4d8445-1e50-4b50-89d4-28ae99aa6aef] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3f4d8445-1e50-4b50-89d4-28ae99aa6aef] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.0182467s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-362209 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (53.58s)

TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

TestFunctional/parallel/CpCmd (0.95s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh -n functional-362209 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 cp functional-362209:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2625640919/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh -n functional-362209 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.95s)

TestFunctional/parallel/MySQL (32.85s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-362209 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-pg824" [7637098d-f0f0-4557-bd55-6d2df10c3d47] Pending
helpers_test.go:344: "mysql-859648c796-pg824" [7637098d-f0f0-4557-bd55-6d2df10c3d47] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-pg824" [7637098d-f0f0-4557-bd55-6d2df10c3d47] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.071030275s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-362209 exec mysql-859648c796-pg824 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-362209 exec mysql-859648c796-pg824 -- mysql -ppassword -e "show databases;": exit status 1 (438.961653ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-362209 exec mysql-859648c796-pg824 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-362209 exec mysql-859648c796-pg824 -- mysql -ppassword -e "show databases;": exit status 1 (253.969002ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-362209 exec mysql-859648c796-pg824 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-362209 exec mysql-859648c796-pg824 -- mysql -ppassword -e "show databases;": exit status 1 (131.33869ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-362209 exec mysql-859648c796-pg824 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.85s)

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/73840/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "sudo cat /etc/test/nested/copy/73840/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.5s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/73840.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "sudo cat /etc/ssl/certs/73840.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/73840.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "sudo cat /usr/share/ca-certificates/73840.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/738402.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "sudo cat /etc/ssl/certs/738402.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/738402.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "sudo cat /usr/share/ca-certificates/738402.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.50s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-362209 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-362209 ssh "sudo systemctl is-active docker": exit status 1 (227.32689ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-362209 ssh "sudo systemctl is-active crio": exit status 1 (234.770011ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

TestFunctional/parallel/License (0.77s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.77s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.58s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-362209 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-362209
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-362209
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-362209 image ls --format short --alsologtostderr:
I1006 01:12:09.353263   81197 out.go:296] Setting OutFile to fd 1 ...
I1006 01:12:09.353416   81197 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 01:12:09.353427   81197 out.go:309] Setting ErrFile to fd 2...
I1006 01:12:09.353435   81197 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 01:12:09.353646   81197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-66550/.minikube/bin
I1006 01:12:09.354208   81197 config.go:182] Loaded profile config "functional-362209": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1006 01:12:09.354408   81197 config.go:182] Loaded profile config "functional-362209": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1006 01:12:09.354805   81197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1006 01:12:09.354864   81197 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 01:12:09.368894   81197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38265
I1006 01:12:09.369304   81197 main.go:141] libmachine: () Calling .GetVersion
I1006 01:12:09.369905   81197 main.go:141] libmachine: Using API Version  1
I1006 01:12:09.369941   81197 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 01:12:09.370276   81197 main.go:141] libmachine: () Calling .GetMachineName
I1006 01:12:09.370459   81197 main.go:141] libmachine: (functional-362209) Calling .GetState
I1006 01:12:09.372067   81197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1006 01:12:09.372102   81197 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 01:12:09.386692   81197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
I1006 01:12:09.387029   81197 main.go:141] libmachine: () Calling .GetVersion
I1006 01:12:09.387436   81197 main.go:141] libmachine: Using API Version  1
I1006 01:12:09.387459   81197 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 01:12:09.387772   81197 main.go:141] libmachine: () Calling .GetMachineName
I1006 01:12:09.387919   81197 main.go:141] libmachine: (functional-362209) Calling .DriverName
I1006 01:12:09.388128   81197 ssh_runner.go:195] Run: systemctl --version
I1006 01:12:09.388152   81197 main.go:141] libmachine: (functional-362209) Calling .GetSSHHostname
I1006 01:12:09.390848   81197 main.go:141] libmachine: (functional-362209) DBG | domain functional-362209 has defined MAC address 52:54:00:68:86:6a in network mk-functional-362209
I1006 01:12:09.391284   81197 main.go:141] libmachine: (functional-362209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:86:6a", ip: ""} in network mk-functional-362209: {Iface:virbr1 ExpiryTime:2023-10-06 02:09:20 +0000 UTC Type:0 Mac:52:54:00:68:86:6a Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:functional-362209 Clientid:01:52:54:00:68:86:6a}
I1006 01:12:09.391308   81197 main.go:141] libmachine: (functional-362209) DBG | domain functional-362209 has defined IP address 192.168.50.84 and MAC address 52:54:00:68:86:6a in network mk-functional-362209
I1006 01:12:09.391457   81197 main.go:141] libmachine: (functional-362209) Calling .GetSSHPort
I1006 01:12:09.391619   81197 main.go:141] libmachine: (functional-362209) Calling .GetSSHKeyPath
I1006 01:12:09.391735   81197 main.go:141] libmachine: (functional-362209) Calling .GetSSHUsername
I1006 01:12:09.391857   81197 sshutil.go:53] new ssh client: &{IP:192.168.50.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/functional-362209/id_rsa Username:docker}
I1006 01:12:09.487260   81197 ssh_runner.go:195] Run: sudo crictl images --output json
I1006 01:12:09.535461   81197 main.go:141] libmachine: Making call to close driver server
I1006 01:12:09.535479   81197 main.go:141] libmachine: (functional-362209) Calling .Close
I1006 01:12:09.535765   81197 main.go:141] libmachine: Successfully made call to close driver server
I1006 01:12:09.535829   81197 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 01:12:09.535846   81197 main.go:141] libmachine: Making call to close driver server
I1006 01:12:09.535856   81197 main.go:141] libmachine: (functional-362209) Calling .Close
I1006 01:12:09.535807   81197 main.go:141] libmachine: (functional-362209) DBG | Closing plugin on server side
I1006 01:12:09.536078   81197 main.go:141] libmachine: Successfully made call to close driver server
I1006 01:12:09.536093   81197 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-362209 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.28.2            | sha256:55f13c | 33.4MB |
| registry.k8s.io/kube-scheduler              | v1.28.2            | sha256:7a5d9d | 18.8MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/library/mysql                     | 5.7                | sha256:a5b7ce | 170MB  |
| docker.io/library/nginx                     | latest             | sha256:61395b | 70.5MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| localhost/my-image                          | functional-362209  | sha256:2dfff1 | 775kB  |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| gcr.io/google-containers/addon-resizer      | functional-362209  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/minikube-local-cache-test | functional-362209  | sha256:85744a | 1.01kB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/kube-apiserver              | v1.28.2            | sha256:cdcab1 | 34.7MB |
| registry.k8s.io/kube-proxy                  | v1.28.2            | sha256:c120fe | 24.6MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-362209 image ls --format table --alsologtostderr:
I1006 01:12:15.727139   81724 out.go:296] Setting OutFile to fd 1 ...
I1006 01:12:15.727279   81724 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 01:12:15.727290   81724 out.go:309] Setting ErrFile to fd 2...
I1006 01:12:15.727296   81724 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 01:12:15.727569   81724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-66550/.minikube/bin
I1006 01:12:15.728272   81724 config.go:182] Loaded profile config "functional-362209": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1006 01:12:15.728405   81724 config.go:182] Loaded profile config "functional-362209": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1006 01:12:15.728799   81724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1006 01:12:15.728851   81724 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 01:12:15.743489   81724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33263
I1006 01:12:15.743889   81724 main.go:141] libmachine: () Calling .GetVersion
I1006 01:12:15.744528   81724 main.go:141] libmachine: Using API Version  1
I1006 01:12:15.744562   81724 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 01:12:15.744924   81724 main.go:141] libmachine: () Calling .GetMachineName
I1006 01:12:15.745184   81724 main.go:141] libmachine: (functional-362209) Calling .GetState
I1006 01:12:15.747090   81724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1006 01:12:15.747135   81724 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 01:12:15.760939   81724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43051
I1006 01:12:15.761346   81724 main.go:141] libmachine: () Calling .GetVersion
I1006 01:12:15.761813   81724 main.go:141] libmachine: Using API Version  1
I1006 01:12:15.761836   81724 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 01:12:15.762124   81724 main.go:141] libmachine: () Calling .GetMachineName
I1006 01:12:15.762317   81724 main.go:141] libmachine: (functional-362209) Calling .DriverName
I1006 01:12:15.762534   81724 ssh_runner.go:195] Run: systemctl --version
I1006 01:12:15.762574   81724 main.go:141] libmachine: (functional-362209) Calling .GetSSHHostname
I1006 01:12:15.765351   81724 main.go:141] libmachine: (functional-362209) DBG | domain functional-362209 has defined MAC address 52:54:00:68:86:6a in network mk-functional-362209
I1006 01:12:15.765769   81724 main.go:141] libmachine: (functional-362209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:86:6a", ip: ""} in network mk-functional-362209: {Iface:virbr1 ExpiryTime:2023-10-06 02:09:20 +0000 UTC Type:0 Mac:52:54:00:68:86:6a Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:functional-362209 Clientid:01:52:54:00:68:86:6a}
I1006 01:12:15.765811   81724 main.go:141] libmachine: (functional-362209) DBG | domain functional-362209 has defined IP address 192.168.50.84 and MAC address 52:54:00:68:86:6a in network mk-functional-362209
I1006 01:12:15.765907   81724 main.go:141] libmachine: (functional-362209) Calling .GetSSHPort
I1006 01:12:15.766099   81724 main.go:141] libmachine: (functional-362209) Calling .GetSSHKeyPath
I1006 01:12:15.766263   81724 main.go:141] libmachine: (functional-362209) Calling .GetSSHUsername
I1006 01:12:15.766460   81724 sshutil.go:53] new ssh client: &{IP:192.168.50.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/functional-362209/id_rsa Username:docker}
I1006 01:12:15.884984   81724 ssh_runner.go:195] Run: sudo crictl images --output json
I1006 01:12:15.934943   81724 main.go:141] libmachine: Making call to close driver server
I1006 01:12:15.934959   81724 main.go:141] libmachine: (functional-362209) Calling .Close
I1006 01:12:15.935268   81724 main.go:141] libmachine: Successfully made call to close driver server
I1006 01:12:15.935290   81724 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 01:12:15.935301   81724 main.go:141] libmachine: Making call to close driver server
I1006 01:12:15.935311   81724 main.go:141] libmachine: (functional-362209) Calling .Close
I1006 01:12:15.935602   81724 main.go:141] libmachine: (functional-362209) DBG | Closing plugin on server side
I1006 01:12:15.935620   81724 main.go:141] libmachine: Successfully made call to close driver server
I1006 01:12:15.935645   81724 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-362209 image ls --format json --alsologtostderr:
[{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},
{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},
{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},
{"id":"sha256:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"34662976"},
{"id":"sha256:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"33395782"},
{"id":"sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":["registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"24558871"},
{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},
{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},
{"id":"sha256:85744a824f8226216dab3fa964fb85ebaa258393689b0cabbdd23fb5a9b308c7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-362209"],"size":"1008"},
{"id":"sha256:a5b7ceed4074932a04ea553af3124bb03b249affe14899e2cd746d1a63e12ecc","repoDigests":["docker.io/library/mysql@sha256:a06310bb26d02a6118ae7fa825c172a0bf594e178c72230fc31674f348033270"],"repoTags":["docker.io/library/mysql:5.7"],"size":"170206062"},
{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-362209"],"size":"10823156"},
{"id":"sha256:2dfff1d57fa9ea95b9e706a295cf93309201816aee5f452f1d0cb13a982122fd","repoDigests":[],"repoTags":["localhost/my-image:functional-362209"],"size":"774902"},
{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},
{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},
{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},
{"id":"sha256:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"18811134"},
{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},
{"id":"sha256:61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":["docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755"],"repoTags":["docker.io/library/nginx:latest"],"size":"70481054"},
{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-362209 image ls --format json --alsologtostderr:
I1006 01:12:15.312581   81603 out.go:296] Setting OutFile to fd 1 ...
I1006 01:12:15.312685   81603 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 01:12:15.312693   81603 out.go:309] Setting ErrFile to fd 2...
I1006 01:12:15.312697   81603 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 01:12:15.312901   81603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-66550/.minikube/bin
I1006 01:12:15.313534   81603 config.go:182] Loaded profile config "functional-362209": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1006 01:12:15.313632   81603 config.go:182] Loaded profile config "functional-362209": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1006 01:12:15.313988   81603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1006 01:12:15.314052   81603 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 01:12:15.328140   81603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33149
I1006 01:12:15.328632   81603 main.go:141] libmachine: () Calling .GetVersion
I1006 01:12:15.329238   81603 main.go:141] libmachine: Using API Version  1
I1006 01:12:15.329259   81603 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 01:12:15.329661   81603 main.go:141] libmachine: () Calling .GetMachineName
I1006 01:12:15.329869   81603 main.go:141] libmachine: (functional-362209) Calling .GetState
I1006 01:12:15.331739   81603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1006 01:12:15.331790   81603 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 01:12:15.345561   81603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34393
I1006 01:12:15.345983   81603 main.go:141] libmachine: () Calling .GetVersion
I1006 01:12:15.346448   81603 main.go:141] libmachine: Using API Version  1
I1006 01:12:15.346470   81603 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 01:12:15.346944   81603 main.go:141] libmachine: () Calling .GetMachineName
I1006 01:12:15.347126   81603 main.go:141] libmachine: (functional-362209) Calling .DriverName
I1006 01:12:15.347330   81603 ssh_runner.go:195] Run: systemctl --version
I1006 01:12:15.347360   81603 main.go:141] libmachine: (functional-362209) Calling .GetSSHHostname
I1006 01:12:15.349997   81603 main.go:141] libmachine: (functional-362209) DBG | domain functional-362209 has defined MAC address 52:54:00:68:86:6a in network mk-functional-362209
I1006 01:12:15.350459   81603 main.go:141] libmachine: (functional-362209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:86:6a", ip: ""} in network mk-functional-362209: {Iface:virbr1 ExpiryTime:2023-10-06 02:09:20 +0000 UTC Type:0 Mac:52:54:00:68:86:6a Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:functional-362209 Clientid:01:52:54:00:68:86:6a}
I1006 01:12:15.350508   81603 main.go:141] libmachine: (functional-362209) DBG | domain functional-362209 has defined IP address 192.168.50.84 and MAC address 52:54:00:68:86:6a in network mk-functional-362209
I1006 01:12:15.350638   81603 main.go:141] libmachine: (functional-362209) Calling .GetSSHPort
I1006 01:12:15.350805   81603 main.go:141] libmachine: (functional-362209) Calling .GetSSHKeyPath
I1006 01:12:15.350956   81603 main.go:141] libmachine: (functional-362209) Calling .GetSSHUsername
I1006 01:12:15.351130   81603 sshutil.go:53] new ssh client: &{IP:192.168.50.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/functional-362209/id_rsa Username:docker}
I1006 01:12:15.475269   81603 ssh_runner.go:195] Run: sudo crictl images --output json
I1006 01:12:15.659936   81603 main.go:141] libmachine: Making call to close driver server
I1006 01:12:15.659958   81603 main.go:141] libmachine: (functional-362209) Calling .Close
I1006 01:12:15.660248   81603 main.go:141] libmachine: Successfully made call to close driver server
I1006 01:12:15.660275   81603 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 01:12:15.660287   81603 main.go:141] libmachine: Making call to close driver server
I1006 01:12:15.660298   81603 main.go:141] libmachine: (functional-362209) Calling .Close
I1006 01:12:15.660561   81603 main.go:141] libmachine: (functional-362209) DBG | Closing plugin on server side
I1006 01:12:15.660622   81603 main.go:141] libmachine: Successfully made call to close driver server
I1006 01:12:15.660653   81603 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)
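(Editor's note) The `image ls --format json` stdout above is a JSON array of image records with `id`, `repoDigests`, `repoTags`, and `size` fields. A minimal sketch of post-processing that output; the two sample records are abridged copies from the stdout above, and the variable names are illustrative, not part of minikube:

```python
import json

# Abridged sample of `minikube image ls --format json` output, copied from the log above.
raw = '''[
 {"id": "sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
  "repoDigests": ["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],
  "repoTags": ["registry.k8s.io/pause:3.9"], "size": "321520"},
 {"id": "sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
  "repoDigests": ["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],
  "repoTags": ["registry.k8s.io/etcd:3.5.9-0"], "size": "102894559"}
]'''

images = json.loads(raw)

# "size" is a decimal string of bytes, not a number, so convert before arithmetic.
total_bytes = sum(int(img["size"]) for img in images)

# Index image IDs by repo tag for quick lookup.
by_tag = {tag: img["id"] for img in images for tag in img["repoTags"]}

print(total_bytes)
print(by_tag["registry.k8s.io/etcd:3.5.9-0"])
```

Note the string-typed `size`: summing the raw fields without `int()` would concatenate rather than add.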
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-362209 image ls --format yaml --alsologtostderr:
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:a5b7ceed4074932a04ea553af3124bb03b249affe14899e2cd746d1a63e12ecc
repoDigests:
- docker.io/library/mysql@sha256:a06310bb26d02a6118ae7fa825c172a0bf594e178c72230fc31674f348033270
repoTags:
- docker.io/library/mysql:5.7
size: "170206062"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "33395782"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests:
- docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755
repoTags:
- docker.io/library/nginx:latest
size: "70481054"
- id: sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "24558871"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "18811134"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "34662976"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:85744a824f8226216dab3fa964fb85ebaa258393689b0cabbdd23fb5a9b308c7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-362209
size: "1008"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-362209
size: "10823156"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-362209 image ls --format yaml --alsologtostderr:
I1006 01:12:09.599662   81221 out.go:296] Setting OutFile to fd 1 ...
I1006 01:12:09.599903   81221 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 01:12:09.599912   81221 out.go:309] Setting ErrFile to fd 2...
I1006 01:12:09.599919   81221 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 01:12:09.600125   81221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-66550/.minikube/bin
I1006 01:12:09.600701   81221 config.go:182] Loaded profile config "functional-362209": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1006 01:12:09.600834   81221 config.go:182] Loaded profile config "functional-362209": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1006 01:12:09.601212   81221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1006 01:12:09.601291   81221 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 01:12:09.615630   81221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40215
I1006 01:12:09.616085   81221 main.go:141] libmachine: () Calling .GetVersion
I1006 01:12:09.616634   81221 main.go:141] libmachine: Using API Version  1
I1006 01:12:09.616658   81221 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 01:12:09.617037   81221 main.go:141] libmachine: () Calling .GetMachineName
I1006 01:12:09.617233   81221 main.go:141] libmachine: (functional-362209) Calling .GetState
I1006 01:12:09.619033   81221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1006 01:12:09.619068   81221 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 01:12:09.634537   81221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
I1006 01:12:09.634894   81221 main.go:141] libmachine: () Calling .GetVersion
I1006 01:12:09.635313   81221 main.go:141] libmachine: Using API Version  1
I1006 01:12:09.635342   81221 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 01:12:09.635593   81221 main.go:141] libmachine: () Calling .GetMachineName
I1006 01:12:09.635767   81221 main.go:141] libmachine: (functional-362209) Calling .DriverName
I1006 01:12:09.635943   81221 ssh_runner.go:195] Run: systemctl --version
I1006 01:12:09.635976   81221 main.go:141] libmachine: (functional-362209) Calling .GetSSHHostname
I1006 01:12:09.638498   81221 main.go:141] libmachine: (functional-362209) DBG | domain functional-362209 has defined MAC address 52:54:00:68:86:6a in network mk-functional-362209
I1006 01:12:09.638945   81221 main.go:141] libmachine: (functional-362209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:86:6a", ip: ""} in network mk-functional-362209: {Iface:virbr1 ExpiryTime:2023-10-06 02:09:20 +0000 UTC Type:0 Mac:52:54:00:68:86:6a Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:functional-362209 Clientid:01:52:54:00:68:86:6a}
I1006 01:12:09.638973   81221 main.go:141] libmachine: (functional-362209) DBG | domain functional-362209 has defined IP address 192.168.50.84 and MAC address 52:54:00:68:86:6a in network mk-functional-362209
I1006 01:12:09.639116   81221 main.go:141] libmachine: (functional-362209) Calling .GetSSHPort
I1006 01:12:09.639276   81221 main.go:141] libmachine: (functional-362209) Calling .GetSSHKeyPath
I1006 01:12:09.639424   81221 main.go:141] libmachine: (functional-362209) Calling .GetSSHUsername
I1006 01:12:09.639562   81221 sshutil.go:53] new ssh client: &{IP:192.168.50.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/functional-362209/id_rsa Username:docker}
I1006 01:12:09.733806   81221 ssh_runner.go:195] Run: sudo crictl images --output json
I1006 01:12:09.778558   81221 main.go:141] libmachine: Making call to close driver server
I1006 01:12:09.778585   81221 main.go:141] libmachine: (functional-362209) Calling .Close
I1006 01:12:09.778867   81221 main.go:141] libmachine: Successfully made call to close driver server
I1006 01:12:09.778888   81221 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 01:12:09.778900   81221 main.go:141] libmachine: Making call to close driver server
I1006 01:12:09.778902   81221 main.go:141] libmachine: (functional-362209) DBG | Closing plugin on server side
I1006 01:12:09.778909   81221 main.go:141] libmachine: (functional-362209) Calling .Close
I1006 01:12:09.779178   81221 main.go:141] libmachine: Successfully made call to close driver server
I1006 01:12:09.779194   81221 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
TestFunctional/parallel/ImageCommands/ImageBuild (5.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-362209 ssh pgrep buildkitd: exit status 1 (217.726577ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image build -t localhost/my-image:functional-362209 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-362209 image build -t localhost/my-image:functional-362209 testdata/build --alsologtostderr: (4.915204223s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-362209 image build -t localhost/my-image:functional-362209 testdata/build --alsologtostderr:
I1006 01:12:10.057586   81274 out.go:296] Setting OutFile to fd 1 ...
I1006 01:12:10.057746   81274 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 01:12:10.057758   81274 out.go:309] Setting ErrFile to fd 2...
I1006 01:12:10.057765   81274 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1006 01:12:10.057977   81274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-66550/.minikube/bin
I1006 01:12:10.058624   81274 config.go:182] Loaded profile config "functional-362209": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1006 01:12:10.059342   81274 config.go:182] Loaded profile config "functional-362209": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
I1006 01:12:10.059764   81274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1006 01:12:10.059838   81274 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 01:12:10.074475   81274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33409
I1006 01:12:10.075014   81274 main.go:141] libmachine: () Calling .GetVersion
I1006 01:12:10.075670   81274 main.go:141] libmachine: Using API Version  1
I1006 01:12:10.075699   81274 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 01:12:10.076060   81274 main.go:141] libmachine: () Calling .GetMachineName
I1006 01:12:10.076275   81274 main.go:141] libmachine: (functional-362209) Calling .GetState
I1006 01:12:10.078303   81274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1006 01:12:10.078369   81274 main.go:141] libmachine: Launching plugin server for driver kvm2
I1006 01:12:10.092548   81274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34525
I1006 01:12:10.093013   81274 main.go:141] libmachine: () Calling .GetVersion
I1006 01:12:10.093455   81274 main.go:141] libmachine: Using API Version  1
I1006 01:12:10.093474   81274 main.go:141] libmachine: () Calling .SetConfigRaw
I1006 01:12:10.093847   81274 main.go:141] libmachine: () Calling .GetMachineName
I1006 01:12:10.094083   81274 main.go:141] libmachine: (functional-362209) Calling .DriverName
I1006 01:12:10.094294   81274 ssh_runner.go:195] Run: systemctl --version
I1006 01:12:10.094341   81274 main.go:141] libmachine: (functional-362209) Calling .GetSSHHostname
I1006 01:12:10.097181   81274 main.go:141] libmachine: (functional-362209) DBG | domain functional-362209 has defined MAC address 52:54:00:68:86:6a in network mk-functional-362209
I1006 01:12:10.097621   81274 main.go:141] libmachine: (functional-362209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:86:6a", ip: ""} in network mk-functional-362209: {Iface:virbr1 ExpiryTime:2023-10-06 02:09:20 +0000 UTC Type:0 Mac:52:54:00:68:86:6a Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:functional-362209 Clientid:01:52:54:00:68:86:6a}
I1006 01:12:10.097653   81274 main.go:141] libmachine: (functional-362209) DBG | domain functional-362209 has defined IP address 192.168.50.84 and MAC address 52:54:00:68:86:6a in network mk-functional-362209
I1006 01:12:10.097837   81274 main.go:141] libmachine: (functional-362209) Calling .GetSSHPort
I1006 01:12:10.098029   81274 main.go:141] libmachine: (functional-362209) Calling .GetSSHKeyPath
I1006 01:12:10.098204   81274 main.go:141] libmachine: (functional-362209) Calling .GetSSHUsername
I1006 01:12:10.098345   81274 sshutil.go:53] new ssh client: &{IP:192.168.50.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/functional-362209/id_rsa Username:docker}
I1006 01:12:10.204885   81274 build_images.go:151] Building image from path: /tmp/build.1426941594.tar
I1006 01:12:10.204979   81274 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1006 01:12:10.216394   81274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1426941594.tar
I1006 01:12:10.221892   81274 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1426941594.tar: stat -c "%s %y" /var/lib/minikube/build/build.1426941594.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1426941594.tar': No such file or directory
I1006 01:12:10.221937   81274 ssh_runner.go:362] scp /tmp/build.1426941594.tar --> /var/lib/minikube/build/build.1426941594.tar (3072 bytes)
I1006 01:12:10.258066   81274 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1426941594
I1006 01:12:10.270011   81274 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1426941594 -xf /var/lib/minikube/build/build.1426941594.tar
I1006 01:12:10.284013   81274 containerd.go:378] Building image: /var/lib/minikube/build/build.1426941594
I1006 01:12:10.284083   81274 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1426941594 --local dockerfile=/var/lib/minikube/build/build.1426941594 --output type=image,name=localhost/my-image:functional-362209
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.0s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.3s
#6 [2/3] RUN true
#6 DONE 0.6s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:9e4978a345e84d23cb3cef9c58057a9a45b3c17408e5cdd2fee299d3c17e1223 0.1s done
#8 exporting config sha256:2dfff1d57fa9ea95b9e706a295cf93309201816aee5f452f1d0cb13a982122fd 0.0s done
#8 naming to localhost/my-image:functional-362209
#8 naming to localhost/my-image:functional-362209 0.0s done
#8 DONE 0.3s
I1006 01:12:14.876588   81274 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1426941594 --local dockerfile=/var/lib/minikube/build/build.1426941594 --output type=image,name=localhost/my-image:functional-362209: (4.592466082s)
I1006 01:12:14.876693   81274 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1426941594
I1006 01:12:14.891923   81274 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1426941594.tar
I1006 01:12:14.902729   81274 build_images.go:207] Built localhost/my-image:functional-362209 from /tmp/build.1426941594.tar
I1006 01:12:14.902767   81274 build_images.go:123] succeeded building to: functional-362209
I1006 01:12:14.902777   81274 build_images.go:124] failed building to: 
I1006 01:12:14.902810   81274 main.go:141] libmachine: Making call to close driver server
I1006 01:12:14.902824   81274 main.go:141] libmachine: (functional-362209) Calling .Close
I1006 01:12:14.903153   81274 main.go:141] libmachine: (functional-362209) DBG | Closing plugin on server side
I1006 01:12:14.903170   81274 main.go:141] libmachine: Successfully made call to close driver server
I1006 01:12:14.903183   81274 main.go:141] libmachine: Making call to close connection to plugin binary
I1006 01:12:14.903193   81274 main.go:141] libmachine: Making call to close driver server
I1006 01:12:14.903211   81274 main.go:141] libmachine: (functional-362209) Calling .Close
I1006 01:12:14.903425   81274 main.go:141] libmachine: Successfully made call to close driver server
I1006 01:12:14.903457   81274 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.47s)
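(Editor's note) The build log above shows minikube's flow: pack the local `testdata/build` context into a tar (`build_images.go:151`), scp it to `/var/lib/minikube/build`, extract it, then run `buildctl build --frontend dockerfile.v0`. A minimal sketch of only the first step, the context packing, using the Python stdlib; `pack_build_context` is a hypothetical helper for illustration, not minikube's actual (Go) implementation:

```python
import io
import tarfile

def pack_build_context(files: dict) -> bytes:
    """Pack named files into an in-memory tar archive, mirroring the
    'Building image from path: /tmp/build.NNN.tar' step in the log above."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

# A context matching the Dockerfile steps seen in the build output
# (FROM busybox, RUN true, ADD content.txt /).
context = pack_build_context({
    "Dockerfile": b"FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n",
    "content.txt": b"hello\n",
})

# Reading the archive back lists the members buildctl would see after extraction.
members = tarfile.open(fileobj=io.BytesIO(context)).getnames()
print(members)
```

After the equivalent of this tar is copied and extracted on the node, the log's `buildctl build --frontend dockerfile.v0 --local context=... --local dockerfile=...` consumes the extracted directory for both the context and the Dockerfile.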
TestFunctional/parallel/ImageCommands/Setup (2.62s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.597311091s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-362209
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.62s)
TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-362209 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-362209 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-6psvz" [bf0a9e23-6276-4a8f-bdf4-9f983d5c9a76] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-6psvz" [bf0a9e23-6276-4a8f-bdf4-9f983d5c9a76] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.01302745s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image load --daemon gcr.io/google-containers/addon-resizer:functional-362209 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-362209 image load --daemon gcr.io/google-containers/addon-resizer:functional-362209 --alsologtostderr: (3.702368072s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.99s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image load --daemon gcr.io/google-containers/addon-resizer:functional-362209 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-362209 image load --daemon gcr.io/google-containers/addon-resizer:functional-362209 --alsologtostderr: (5.598756911s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.85s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 service list -o json
functional_test.go:1493: Took "275.998618ms" to run "out/minikube-linux-amd64 -p functional-362209 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.84:30318
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.647495789s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-362209
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image load --daemon gcr.io/google-containers/addon-resizer:functional-362209 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-362209 image load --daemon gcr.io/google-containers/addon-resizer:functional-362209 --alsologtostderr: (4.351804026s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.26s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.84:30318
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image save gcr.io/google-containers/addon-resizer:functional-362209 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-362209 image save gcr.io/google-containers/addon-resizer:functional-362209 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.098871674s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.10s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image rm gcr.io/google-containers/addon-resizer:functional-362209 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.12s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-362209 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.982910048s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.25s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "252.716157ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "69.317845ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "246.246859ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "70.242058ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/MountCmd/any-port (13.75s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-362209 /tmp/TestFunctionalparallelMountCmdany-port4145999791/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696554719694849121" to /tmp/TestFunctionalparallelMountCmdany-port4145999791/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696554719694849121" to /tmp/TestFunctionalparallelMountCmdany-port4145999791/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696554719694849121" to /tmp/TestFunctionalparallelMountCmdany-port4145999791/001/test-1696554719694849121
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-362209 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (238.961728ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  6 01:11 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  6 01:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  6 01:11 test-1696554719694849121
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh cat /mount-9p/test-1696554719694849121
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-362209 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a8e3760f-9bde-48df-b4b9-b419a5cd9c08] Pending
helpers_test.go:344: "busybox-mount" [a8e3760f-9bde-48df-b4b9-b419a5cd9c08] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a8e3760f-9bde-48df-b4b9-b419a5cd9c08] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a8e3760f-9bde-48df-b4b9-b419a5cd9c08] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.032364943s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-362209 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-362209 /tmp/TestFunctionalparallelMountCmdany-port4145999791/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.75s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-362209
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 image save --daemon gcr.io/google-containers/addon-resizer:functional-362209 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-362209 image save --daemon gcr.io/google-containers/addon-resizer:functional-362209 --alsologtostderr: (1.411103809s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-362209
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.45s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/MountCmd/specific-port (1.95s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-362209 /tmp/TestFunctionalparallelMountCmdspecific-port2546445363/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-362209 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.995809ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-362209 /tmp/TestFunctionalparallelMountCmdspecific-port2546445363/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-362209 ssh "sudo umount -f /mount-9p": exit status 1 (281.506251ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-362209 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-362209 /tmp/TestFunctionalparallelMountCmdspecific-port2546445363/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-362209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3830683394/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-362209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3830683394/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-362209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3830683394/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-362209 ssh "findmnt -T" /mount1: exit status 1 (331.942826ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-362209 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-362209 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-362209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3830683394/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-362209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3830683394/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-362209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3830683394/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2023/10/06 01:12:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-362209
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-362209
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-362209
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (88.16s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-989525 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-989525 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m28.155138138s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (88.16s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.97s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-989525 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-989525 addons enable ingress --alsologtostderr -v=5: (14.974400672s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.97s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.58s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-989525 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.58s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (39.67s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-989525 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-989525 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.962826799s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-989525 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-989525 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [799f126a-675e-4603-bff8-d3d8ef9c89d8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [799f126a-675e-4603-bff8-d3d8ef9c89d8] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.040363159s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-989525 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-989525 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-989525 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.16
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-989525 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-989525 addons disable ingress-dns --alsologtostderr -v=1: (8.852949249s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-989525 addons disable ingress --alsologtostderr -v=1
E1006 01:14:47.726646   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-989525 addons disable ingress --alsologtostderr -v=1: (7.577808228s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (39.67s)

TestJSONOutput/start/Command (114.72s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-596832 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E1006 01:15:15.413621   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:16:36.096697   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:16:36.102017   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:16:36.112268   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:16:36.132539   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:16:36.172810   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:16:36.253115   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:16:36.413610   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:16:36.734242   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:16:37.375157   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:16:38.655627   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:16:41.217386   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:16:46.338186   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-596832 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m54.723333449s)
--- PASS: TestJSONOutput/start/Command (114.72s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-596832 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-596832 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (2.1s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-596832 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-596832 --output=json --user=testUser: (2.095396916s)
--- PASS: TestJSONOutput/stop/Command (2.10s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-539983 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-539983 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.518095ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2d920888-b6b9-4d3a-90a3-060227362884","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-539983] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cb99a517-9c7a-4f98-bbf8-d7021edaea6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17314"}}
	{"specversion":"1.0","id":"c9b58224-8a95-452c-a2e8-f9c28187979e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"88b7b0a5-1312-4785-8021-1cc359c7e02a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17314-66550/kubeconfig"}}
	{"specversion":"1.0","id":"ecee045d-ba79-42ff-a69b-c0ead210487a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-66550/.minikube"}}
	{"specversion":"1.0","id":"da67397a-02c2-4c28-ba55-ed36e2b91573","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2ba8785f-b6ba-44b9-be2d-ad5cc0b2608f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8ce52a17-7ec7-44ee-8908-6b09b546b08f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-539983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-539983
--- PASS: TestErrorJSONOutput (0.22s)
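The stdout above shows the shape of minikube's `--output=json` stream: one CloudEvents-style JSON object per line, where an error event carries `name`, `exitcode`, and `message` in its `data` field. A minimal sketch of how a consumer might parse such a stream; the field names are taken directly from the log above, while the helper names and the sample event `id` reuse are illustrative:

```python
import json

def parse_events(lines):
    """Parse newline-delimited minikube JSON events into (type, data) pairs."""
    events = []
    for line in lines:
        line = line.strip()
        if not line.startswith("{"):
            continue  # skip any non-JSON log noise interleaved in the stream
        evt = json.loads(line)
        events.append((evt["type"], evt.get("data", {})))
    return events

def first_error(events):
    """Return (name, exitcode, message) of the first error event, or None."""
    for typ, data in events:
        if typ.endswith(".error"):
            return data.get("name"), data.get("exitcode"), data.get("message")
    return None

# Sample event reconstructed from the TestErrorJSONOutput stdout above.
sample_event = {
    "specversion": "1.0",
    "id": "8ce52a17-7ec7-44ee-8908-6b09b546b08f",
    "source": "https://minikube.sigs.k8s.io/",
    "type": "io.k8s.sigs.minikube.error",
    "datacontenttype": "application/json",
    "data": {
        "advice": "", "exitcode": "56", "issues": "",
        "message": "The driver 'fail' is not supported on linux/amd64",
        "name": "DRV_UNSUPPORTED_OS", "url": "",
    },
}
sample_lines = [json.dumps(sample_event)]
```

Run against the sample, `first_error` recovers the `DRV_UNSUPPORTED_OS` / exit code 56 pair that the test asserts on.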

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (129.68s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-412836 --driver=kvm2  --container-runtime=containerd
E1006 01:16:56.578421   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:17:17.059545   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:17:58.020330   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-412836 --driver=kvm2  --container-runtime=containerd: (1m3.892558132s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-416126 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-416126 --driver=kvm2  --container-runtime=containerd: (1m3.064724228s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-412836
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-416126
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-416126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-416126
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-416126: (1.009713995s)
helpers_test.go:175: Cleaning up "first-412836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-412836
--- PASS: TestMinikubeProfile (129.68s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.9s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-303887 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1006 01:19:15.522561   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:19:15.527884   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:19:15.538154   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:19:15.558388   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:19:15.598702   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:19:15.679047   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:19:15.839475   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:19:16.160221   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:19:16.801257   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:19:18.081768   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:19:19.941761   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:19:20.642550   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:19:25.762953   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-303887 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.895144211s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.90s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-303887 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-303887 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)
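The verification above greps the guest's mount table for a 9p entry (`ssh -- mount | grep 9p`). Assuming the usual Linux `mount` output shape (`SRC on DST type FSTYPE (opts)` — an assumption, not shown in the log), the same check could be sketched as:

```python
def has_9p_mount(mount_output, mountpoint="/minikube-host"):
    """Check `mount` output for a 9p filesystem at the given mountpoint,
    mirroring the `mount | grep 9p` verification above."""
    for line in mount_output.splitlines():
        if " type 9p " in line and f" on {mountpoint} " in line:
            return True
    return False
```

The sample source address and options below are illustrative, not taken from this run.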

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.99s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-319564 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1006 01:19:36.004134   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:19:47.727164   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:19:56.484378   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-319564 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.985430828s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.99s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-319564 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-319564 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-303887 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-319564 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-319564 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-319564
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-319564: (1.217246915s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.04s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-319564
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-319564: (23.040388934s)
--- PASS: TestMountStart/serial/RestartStopped (24.04s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-319564 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-319564 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (133.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-865460 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1006 01:20:37.445026   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:21:36.096603   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:21:59.365529   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:22:03.782500   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-865460 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m13.250545441s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (133.69s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865460 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865460 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-865460 -- rollout status deployment/busybox: (4.643768951s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865460 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865460 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865460 -- exec busybox-5bc68d56bd-4xrpv -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865460 -- exec busybox-5bc68d56bd-qrbtg -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865460 -- exec busybox-5bc68d56bd-4xrpv -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865460 -- exec busybox-5bc68d56bd-qrbtg -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865460 -- exec busybox-5bc68d56bd-4xrpv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865460 -- exec busybox-5bc68d56bd-qrbtg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.43s)
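The deployment check above reads pod IPs via the jsonpath `{.items[*].status.podIP}`, which kubectl prints as a single space-separated line. A sketch of the property the test relies on — every replica has an IP and the IPs are distinct (the sample addresses are hypothetical):

```python
def pod_ips_ok(jsonpath_output, expected_pods=2):
    """Validate kubectl jsonpath '{.items[*].status.podIP}' output:
    the expected number of pods each have an IP, and no IP repeats."""
    ips = jsonpath_output.split()
    return len(ips) == expected_pods and len(set(ips)) == expected_pods
```

A duplicated IP (or a pod still pending with no IP) makes the check fail, which is how the test notices a mis-scheduled replica.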

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865460 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865460 -- exec busybox-5bc68d56bd-4xrpv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865460 -- exec busybox-5bc68d56bd-4xrpv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865460 -- exec busybox-5bc68d56bd-qrbtg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-865460 -- exec busybox-5bc68d56bd-qrbtg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)
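The test resolves `host.minikube.internal` inside each pod and extracts the host IP with `awk 'NR==5' | cut -d' ' -f3` before pinging it. A Python equivalent of that shell pipeline, with an illustrative busybox-style nslookup transcript (the exact transcript layout is an assumption; only the target IP `192.168.39.1` appears in the log above):

```python
def host_ip_from_nslookup(output):
    """Mimic the pipeline above on raw nslookup output."""
    line5 = output.splitlines()[4]  # awk 'NR==5': take the fifth line
    return line5.split(" ")[2]      # cut -d' ' -f3: third space-separated field

# Illustrative busybox-style transcript (layout assumed, not from the log).
transcript = (
    "Server: 10.96.0.10\n"
    "Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n"
    "\n"
    "Name: host.minikube.internal\n"
    "Address 1: 192.168.39.1 host.minikube.internal\n"
)
```

The fixed line/field offsets are brittle by design of the original pipeline: they only hold for this exact nslookup output layout.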

                                                
                                    
TestMultiNode/serial/AddNode (44.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-865460 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-865460 -v 3 --alsologtostderr: (43.487562355s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.08s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 cp testdata/cp-test.txt multinode-865460:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 cp multinode-865460:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3170296298/001/cp-test_multinode-865460.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 cp multinode-865460:/home/docker/cp-test.txt multinode-865460-m02:/home/docker/cp-test_multinode-865460_multinode-865460-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460-m02 "sudo cat /home/docker/cp-test_multinode-865460_multinode-865460-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 cp multinode-865460:/home/docker/cp-test.txt multinode-865460-m03:/home/docker/cp-test_multinode-865460_multinode-865460-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460-m03 "sudo cat /home/docker/cp-test_multinode-865460_multinode-865460-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 cp testdata/cp-test.txt multinode-865460-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 cp multinode-865460-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3170296298/001/cp-test_multinode-865460-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 cp multinode-865460-m02:/home/docker/cp-test.txt multinode-865460:/home/docker/cp-test_multinode-865460-m02_multinode-865460.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460 "sudo cat /home/docker/cp-test_multinode-865460-m02_multinode-865460.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 cp multinode-865460-m02:/home/docker/cp-test.txt multinode-865460-m03:/home/docker/cp-test_multinode-865460-m02_multinode-865460-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460-m03 "sudo cat /home/docker/cp-test_multinode-865460-m02_multinode-865460-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 cp testdata/cp-test.txt multinode-865460-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 cp multinode-865460-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3170296298/001/cp-test_multinode-865460-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 cp multinode-865460-m03:/home/docker/cp-test.txt multinode-865460:/home/docker/cp-test_multinode-865460-m03_multinode-865460.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460 "sudo cat /home/docker/cp-test_multinode-865460-m03_multinode-865460.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 cp multinode-865460-m03:/home/docker/cp-test.txt multinode-865460-m02:/home/docker/cp-test_multinode-865460-m03_multinode-865460-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 ssh -n multinode-865460-m02 "sudo cat /home/docker/cp-test_multinode-865460-m03_multinode-865460-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.81s)

TestMultiNode/serial/StopNode (2.22s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-865460 node stop m03: (1.33669429s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-865460 status: exit status 7 (442.452384ms)

-- stdout --
	multinode-865460
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-865460-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-865460-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-865460 status --alsologtostderr: exit status 7 (439.474558ms)

-- stdout --
	multinode-865460
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-865460-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-865460-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1006 01:23:44.360164   88937 out.go:296] Setting OutFile to fd 1 ...
	I1006 01:23:44.360295   88937 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:23:44.360304   88937 out.go:309] Setting ErrFile to fd 2...
	I1006 01:23:44.360313   88937 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:23:44.360613   88937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-66550/.minikube/bin
	I1006 01:23:44.360840   88937 out.go:303] Setting JSON to false
	I1006 01:23:44.360886   88937 mustload.go:65] Loading cluster: multinode-865460
	I1006 01:23:44.360984   88937 notify.go:220] Checking for updates...
	I1006 01:23:44.361336   88937 config.go:182] Loaded profile config "multinode-865460": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1006 01:23:44.361353   88937 status.go:255] checking status of multinode-865460 ...
	I1006 01:23:44.361839   88937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:23:44.361886   88937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:23:44.379938   88937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43069
	I1006 01:23:44.380382   88937 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:23:44.380892   88937 main.go:141] libmachine: Using API Version  1
	I1006 01:23:44.380922   88937 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:23:44.381273   88937 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:23:44.381458   88937 main.go:141] libmachine: (multinode-865460) Calling .GetState
	I1006 01:23:44.383015   88937 status.go:330] multinode-865460 host status = "Running" (err=<nil>)
	I1006 01:23:44.383037   88937 host.go:66] Checking if "multinode-865460" exists ...
	I1006 01:23:44.383400   88937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:23:44.383439   88937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:23:44.397023   88937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38961
	I1006 01:23:44.397429   88937 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:23:44.397883   88937 main.go:141] libmachine: Using API Version  1
	I1006 01:23:44.397905   88937 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:23:44.398193   88937 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:23:44.398358   88937 main.go:141] libmachine: (multinode-865460) Calling .GetIP
	I1006 01:23:44.400908   88937 main.go:141] libmachine: (multinode-865460) DBG | domain multinode-865460 has defined MAC address 52:54:00:4e:92:b6 in network mk-multinode-865460
	I1006 01:23:44.401270   88937 main.go:141] libmachine: (multinode-865460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:92:b6", ip: ""} in network mk-multinode-865460: {Iface:virbr1 ExpiryTime:2023-10-06 02:20:45 +0000 UTC Type:0 Mac:52:54:00:4e:92:b6 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-865460 Clientid:01:52:54:00:4e:92:b6}
	I1006 01:23:44.401301   88937 main.go:141] libmachine: (multinode-865460) DBG | domain multinode-865460 has defined IP address 192.168.39.159 and MAC address 52:54:00:4e:92:b6 in network mk-multinode-865460
	I1006 01:23:44.401405   88937 host.go:66] Checking if "multinode-865460" exists ...
	I1006 01:23:44.401691   88937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:23:44.401726   88937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:23:44.415644   88937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41069
	I1006 01:23:44.416052   88937 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:23:44.416508   88937 main.go:141] libmachine: Using API Version  1
	I1006 01:23:44.416548   88937 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:23:44.416846   88937 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:23:44.417047   88937 main.go:141] libmachine: (multinode-865460) Calling .DriverName
	I1006 01:23:44.417236   88937 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 01:23:44.417272   88937 main.go:141] libmachine: (multinode-865460) Calling .GetSSHHostname
	I1006 01:23:44.420193   88937 main.go:141] libmachine: (multinode-865460) DBG | domain multinode-865460 has defined MAC address 52:54:00:4e:92:b6 in network mk-multinode-865460
	I1006 01:23:44.420685   88937 main.go:141] libmachine: (multinode-865460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:92:b6", ip: ""} in network mk-multinode-865460: {Iface:virbr1 ExpiryTime:2023-10-06 02:20:45 +0000 UTC Type:0 Mac:52:54:00:4e:92:b6 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-865460 Clientid:01:52:54:00:4e:92:b6}
	I1006 01:23:44.420714   88937 main.go:141] libmachine: (multinode-865460) DBG | domain multinode-865460 has defined IP address 192.168.39.159 and MAC address 52:54:00:4e:92:b6 in network mk-multinode-865460
	I1006 01:23:44.420840   88937 main.go:141] libmachine: (multinode-865460) Calling .GetSSHPort
	I1006 01:23:44.421033   88937 main.go:141] libmachine: (multinode-865460) Calling .GetSSHKeyPath
	I1006 01:23:44.421185   88937 main.go:141] libmachine: (multinode-865460) Calling .GetSSHUsername
	I1006 01:23:44.421338   88937 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/multinode-865460/id_rsa Username:docker}
	I1006 01:23:44.518087   88937 ssh_runner.go:195] Run: systemctl --version
	I1006 01:23:44.524308   88937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 01:23:44.537585   88937 kubeconfig.go:92] found "multinode-865460" server: "https://192.168.39.159:8443"
	I1006 01:23:44.537615   88937 api_server.go:166] Checking apiserver status ...
	I1006 01:23:44.537674   88937 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 01:23:44.549934   88937 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	I1006 01:23:44.557827   88937 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod4c5a6ee2238fc42d08cc2beb7bc33d6f/e8a719689a17e3e4159fbb3b881f3bcb4ad569aabe2f07bd74e2b2d7190dfaac"
	I1006 01:23:44.557886   88937 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4c5a6ee2238fc42d08cc2beb7bc33d6f/e8a719689a17e3e4159fbb3b881f3bcb4ad569aabe2f07bd74e2b2d7190dfaac/freezer.state
	I1006 01:23:44.565869   88937 api_server.go:204] freezer state: "THAWED"
	I1006 01:23:44.565894   88937 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1006 01:23:44.570863   88937 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I1006 01:23:44.570885   88937 status.go:421] multinode-865460 apiserver status = Running (err=<nil>)
	I1006 01:23:44.570896   88937 status.go:257] multinode-865460 status: &{Name:multinode-865460 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 01:23:44.570918   88937 status.go:255] checking status of multinode-865460-m02 ...
	I1006 01:23:44.571203   88937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:23:44.571250   88937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:23:44.585836   88937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I1006 01:23:44.586235   88937 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:23:44.586723   88937 main.go:141] libmachine: Using API Version  1
	I1006 01:23:44.586741   88937 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:23:44.587078   88937 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:23:44.587294   88937 main.go:141] libmachine: (multinode-865460-m02) Calling .GetState
	I1006 01:23:44.588746   88937 status.go:330] multinode-865460-m02 host status = "Running" (err=<nil>)
	I1006 01:23:44.588770   88937 host.go:66] Checking if "multinode-865460-m02" exists ...
	I1006 01:23:44.589069   88937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:23:44.589107   88937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:23:44.603058   88937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38241
	I1006 01:23:44.603431   88937 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:23:44.603835   88937 main.go:141] libmachine: Using API Version  1
	I1006 01:23:44.603857   88937 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:23:44.604137   88937 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:23:44.604328   88937 main.go:141] libmachine: (multinode-865460-m02) Calling .GetIP
	I1006 01:23:44.606987   88937 main.go:141] libmachine: (multinode-865460-m02) DBG | domain multinode-865460-m02 has defined MAC address 52:54:00:2a:4d:93 in network mk-multinode-865460
	I1006 01:23:44.607444   88937 main.go:141] libmachine: (multinode-865460-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:4d:93", ip: ""} in network mk-multinode-865460: {Iface:virbr1 ExpiryTime:2023-10-06 02:22:08 +0000 UTC Type:0 Mac:52:54:00:2a:4d:93 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-865460-m02 Clientid:01:52:54:00:2a:4d:93}
	I1006 01:23:44.607476   88937 main.go:141] libmachine: (multinode-865460-m02) DBG | domain multinode-865460-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:2a:4d:93 in network mk-multinode-865460
	I1006 01:23:44.607545   88937 host.go:66] Checking if "multinode-865460-m02" exists ...
	I1006 01:23:44.607845   88937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:23:44.607878   88937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:23:44.623034   88937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42553
	I1006 01:23:44.623407   88937 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:23:44.623807   88937 main.go:141] libmachine: Using API Version  1
	I1006 01:23:44.623829   88937 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:23:44.624141   88937 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:23:44.624293   88937 main.go:141] libmachine: (multinode-865460-m02) Calling .DriverName
	I1006 01:23:44.624454   88937 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 01:23:44.624473   88937 main.go:141] libmachine: (multinode-865460-m02) Calling .GetSSHHostname
	I1006 01:23:44.626792   88937 main.go:141] libmachine: (multinode-865460-m02) DBG | domain multinode-865460-m02 has defined MAC address 52:54:00:2a:4d:93 in network mk-multinode-865460
	I1006 01:23:44.627162   88937 main.go:141] libmachine: (multinode-865460-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:4d:93", ip: ""} in network mk-multinode-865460: {Iface:virbr1 ExpiryTime:2023-10-06 02:22:08 +0000 UTC Type:0 Mac:52:54:00:2a:4d:93 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-865460-m02 Clientid:01:52:54:00:2a:4d:93}
	I1006 01:23:44.627203   88937 main.go:141] libmachine: (multinode-865460-m02) DBG | domain multinode-865460-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:2a:4d:93 in network mk-multinode-865460
	I1006 01:23:44.627309   88937 main.go:141] libmachine: (multinode-865460-m02) Calling .GetSSHPort
	I1006 01:23:44.627484   88937 main.go:141] libmachine: (multinode-865460-m02) Calling .GetSSHKeyPath
	I1006 01:23:44.627641   88937 main.go:141] libmachine: (multinode-865460-m02) Calling .GetSSHUsername
	I1006 01:23:44.627751   88937 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17314-66550/.minikube/machines/multinode-865460-m02/id_rsa Username:docker}
	I1006 01:23:44.709985   88937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 01:23:44.722054   88937 status.go:257] multinode-865460-m02 status: &{Name:multinode-865460-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1006 01:23:44.722090   88937 status.go:255] checking status of multinode-865460-m03 ...
	I1006 01:23:44.722603   88937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:23:44.722665   88937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:23:44.737171   88937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I1006 01:23:44.737580   88937 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:23:44.738046   88937 main.go:141] libmachine: Using API Version  1
	I1006 01:23:44.738069   88937 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:23:44.738399   88937 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:23:44.738602   88937 main.go:141] libmachine: (multinode-865460-m03) Calling .GetState
	I1006 01:23:44.740081   88937 status.go:330] multinode-865460-m03 host status = "Stopped" (err=<nil>)
	I1006 01:23:44.740096   88937 status.go:343] host is not running, skipping remaining checks
	I1006 01:23:44.740103   88937 status.go:257] multinode-865460-m03 status: &{Name:multinode-865460-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)

TestMultiNode/serial/StartAfterStop (27.9s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-865460 node start m03 --alsologtostderr: (27.227936329s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.90s)

TestMultiNode/serial/RestartKeepsNodes (308.94s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-865460
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-865460
E1006 01:24:15.523732   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:24:43.206466   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:24:47.727162   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:26:10.776734   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:26:36.095933   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-865460: (3m4.437361911s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-865460 --wait=true -v=8 --alsologtostderr
E1006 01:29:15.522534   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-865460 --wait=true -v=8 --alsologtostderr: (2m4.382244988s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-865460
--- PASS: TestMultiNode/serial/RestartKeepsNodes (308.94s)

TestMultiNode/serial/DeleteNode (1.78s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-865460 node delete m03: (1.224393678s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.78s)

TestMultiNode/serial/StopMultiNode (183.95s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 stop
E1006 01:29:47.726814   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:31:36.096697   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-865460 stop: (3m3.753220277s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-865460 status: exit status 7 (101.258432ms)

-- stdout --
	multinode-865460
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-865460-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-865460 status --alsologtostderr: exit status 7 (97.07238ms)

-- stdout --
	multinode-865460
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-865460-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1006 01:32:27.275672   91095 out.go:296] Setting OutFile to fd 1 ...
	I1006 01:32:27.275799   91095 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:32:27.275808   91095 out.go:309] Setting ErrFile to fd 2...
	I1006 01:32:27.275813   91095 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:32:27.276009   91095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-66550/.minikube/bin
	I1006 01:32:27.276194   91095 out.go:303] Setting JSON to false
	I1006 01:32:27.276228   91095 mustload.go:65] Loading cluster: multinode-865460
	I1006 01:32:27.276281   91095 notify.go:220] Checking for updates...
	I1006 01:32:27.276634   91095 config.go:182] Loaded profile config "multinode-865460": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1006 01:32:27.276647   91095 status.go:255] checking status of multinode-865460 ...
	I1006 01:32:27.277043   91095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:32:27.277098   91095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:32:27.296384   91095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33945
	I1006 01:32:27.296807   91095 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:32:27.297357   91095 main.go:141] libmachine: Using API Version  1
	I1006 01:32:27.297389   91095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:32:27.297716   91095 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:32:27.297915   91095 main.go:141] libmachine: (multinode-865460) Calling .GetState
	I1006 01:32:27.299394   91095 status.go:330] multinode-865460 host status = "Stopped" (err=<nil>)
	I1006 01:32:27.299409   91095 status.go:343] host is not running, skipping remaining checks
	I1006 01:32:27.299416   91095 status.go:257] multinode-865460 status: &{Name:multinode-865460 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 01:32:27.299446   91095 status.go:255] checking status of multinode-865460-m02 ...
	I1006 01:32:27.299699   91095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1006 01:32:27.299735   91095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1006 01:32:27.313066   91095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
	I1006 01:32:27.313401   91095 main.go:141] libmachine: () Calling .GetVersion
	I1006 01:32:27.313916   91095 main.go:141] libmachine: Using API Version  1
	I1006 01:32:27.313960   91095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1006 01:32:27.314237   91095 main.go:141] libmachine: () Calling .GetMachineName
	I1006 01:32:27.314429   91095 main.go:141] libmachine: (multinode-865460-m02) Calling .GetState
	I1006 01:32:27.315754   91095 status.go:330] multinode-865460-m02 host status = "Stopped" (err=<nil>)
	I1006 01:32:27.315770   91095 status.go:343] host is not running, skipping remaining checks
	I1006 01:32:27.315777   91095 status.go:257] multinode-865460-m02 status: &{Name:multinode-865460-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.95s)

TestMultiNode/serial/RestartMultiNode (91.73s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-865460 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1006 01:32:59.143491   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-865460 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m31.185497548s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-865460 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (91.73s)

TestMultiNode/serial/ValidateNameConflict (64.95s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-865460
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-865460-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-865460-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (78.017953ms)

-- stdout --
	* [multinode-865460-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-66550/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-66550/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-865460-m02' is duplicated with machine name 'multinode-865460-m02' in profile 'multinode-865460'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-865460-m03 --driver=kvm2  --container-runtime=containerd
E1006 01:34:15.522624   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:34:47.727415   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-865460-m03 --driver=kvm2  --container-runtime=containerd: (1m3.760451073s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-865460
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-865460: exit status 80 (233.194077ms)

-- stdout --
	* Adding node m03 to cluster multinode-865460
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-865460-m03 already exists in multinode-865460-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-865460-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (64.95s)

TestPreload (265.63s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-058345 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E1006 01:35:38.567588   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:36:36.096232   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-058345 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m34.970964447s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-058345 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-058345 image pull gcr.io/k8s-minikube/busybox: (3.133139651s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-058345
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-058345: (1m31.746142408s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-058345 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E1006 01:39:15.521582   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-058345 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m14.693291791s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-058345 image list
helpers_test.go:175: Cleaning up "test-preload-058345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-058345
--- PASS: TestPreload (265.63s)

TestScheduledStopUnix (134.33s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-601697 --memory=2048 --driver=kvm2  --container-runtime=containerd
E1006 01:39:47.727664   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-601697 --memory=2048 --driver=kvm2  --container-runtime=containerd: (1m2.427456205s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-601697 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-601697 -n scheduled-stop-601697
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-601697 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-601697 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-601697 -n scheduled-stop-601697
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-601697
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-601697 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1006 01:41:36.096085   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-601697
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-601697: exit status 7 (81.063745ms)

-- stdout --
	scheduled-stop-601697
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-601697 -n scheduled-stop-601697
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-601697 -n scheduled-stop-601697: exit status 7 (76.319006ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-601697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-601697
--- PASS: TestScheduledStopUnix (134.33s)

TestRunningBinaryUpgrade (174.03s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.2740314816.exe start -p running-upgrade-111752 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.2740314816.exe start -p running-upgrade-111752 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m51.439386296s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-111752 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E1006 01:46:36.095997   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-111752 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (56.747139611s)
helpers_test.go:175: Cleaning up "running-upgrade-111752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-111752
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-111752: (1.278024766s)
--- PASS: TestRunningBinaryUpgrade (174.03s)

TestKubernetesUpgrade (194.18s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302150 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-302150 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m34.738683419s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-302150
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-302150: (12.155104896s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-302150 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-302150 status --format={{.Host}}: exit status 7 (98.470166ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302150 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-302150 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (54.358529847s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-302150 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302150 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-302150 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (131.354187ms)

-- stdout --
	* [kubernetes-upgrade-302150] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-66550/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-66550/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-302150
	    minikube start -p kubernetes-upgrade-302150 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3021502 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-302150 --kubernetes-version=v1.28.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302150 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-302150 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (30.830189011s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-302150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-302150
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-302150: (1.788817716s)
--- PASS: TestKubernetesUpgrade (194.18s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-621685 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-621685 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (101.840063ms)

-- stdout --
	* [NoKubernetes-621685] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-66550/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-66550/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (120.76s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-621685 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-621685 --driver=kvm2  --container-runtime=containerd: (2m0.477432863s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-621685 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (120.76s)

TestNetworkPlugins/group/false (3.37s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-858422 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-858422 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (126.851389ms)

-- stdout --
	* [false-858422] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17314
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17314-66550/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-66550/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1006 01:41:48.988485   95127 out.go:296] Setting OutFile to fd 1 ...
	I1006 01:41:48.988636   95127 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:41:48.988645   95127 out.go:309] Setting ErrFile to fd 2...
	I1006 01:41:48.988651   95127 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1006 01:41:48.988946   95127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17314-66550/.minikube/bin
	I1006 01:41:48.990085   95127 out.go:303] Setting JSON to false
	I1006 01:41:48.991042   95127 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8652,"bootTime":1696547857,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1006 01:41:48.991106   95127 start.go:138] virtualization: kvm guest
	I1006 01:41:48.993370   95127 out.go:177] * [false-858422] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1006 01:41:48.995303   95127 out.go:177]   - MINIKUBE_LOCATION=17314
	I1006 01:41:48.995345   95127 notify.go:220] Checking for updates...
	I1006 01:41:48.996712   95127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 01:41:48.998060   95127 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17314-66550/kubeconfig
	I1006 01:41:48.999302   95127 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17314-66550/.minikube
	I1006 01:41:49.000644   95127 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1006 01:41:49.002152   95127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 01:41:49.004354   95127 config.go:182] Loaded profile config "NoKubernetes-621685": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1006 01:41:49.004504   95127 config.go:182] Loaded profile config "force-systemd-env-665218": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1006 01:41:49.004652   95127 config.go:182] Loaded profile config "offline-containerd-691771": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.2
	I1006 01:41:49.004758   95127 driver.go:378] Setting default libvirt URI to qemu:///system
	I1006 01:41:49.040676   95127 out.go:177] * Using the kvm2 driver based on user configuration
	I1006 01:41:49.041966   95127 start.go:298] selected driver: kvm2
	I1006 01:41:49.041979   95127 start.go:902] validating driver "kvm2" against <nil>
	I1006 01:41:49.041994   95127 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 01:41:49.043982   95127 out.go:177] 
	W1006 01:41:49.045191   95127 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1006 01:41:49.046420   95127 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-858422 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-858422

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-858422

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-858422

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-858422

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-858422

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-858422

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-858422

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-858422

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-858422

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-858422

>>> host: /etc/nsswitch.conf:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

>>> host: /etc/hosts:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

>>> host: /etc/resolv.conf:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-858422

>>> host: crictl pods:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

>>> host: crictl containers:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

>>> k8s: describe netcat deployment:
error: context "false-858422" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-858422" does not exist

>>> k8s: netcat logs:
error: context "false-858422" does not exist

>>> k8s: describe coredns deployment:
error: context "false-858422" does not exist

>>> k8s: describe coredns pods:
error: context "false-858422" does not exist

>>> k8s: coredns logs:
error: context "false-858422" does not exist

>>> k8s: describe api server pod(s):
error: context "false-858422" does not exist

>>> k8s: api server logs:
error: context "false-858422" does not exist

>>> host: /etc/cni:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

>>> host: ip a s:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

>>> host: ip r s:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

>>> host: iptables-save:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

>>> host: iptables table nat:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

>>> k8s: describe kube-proxy daemon set:
error: context "false-858422" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-858422" does not exist

>>> k8s: kube-proxy logs:
error: context "false-858422" does not exist

>>> host: kubelet daemon status:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

>>> host: kubelet daemon config:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-858422

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-858422"

                                                
                                                
----------------------- debugLogs end: false-858422 [took: 3.092460351s] --------------------------------
helpers_test.go:175: Cleaning up "false-858422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-858422
--- PASS: TestNetworkPlugins/group/false (3.37s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (20.67s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-621685 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-621685 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (19.462381519s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-621685 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-621685 status -o json: exit status 2 (271.313663ms)

-- stdout --
	{"Name":"NoKubernetes-621685","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-621685
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.67s)

TestNoKubernetes/serial/Start (29.5s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-621685 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-621685 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.503929298s)
--- PASS: TestNoKubernetes/serial/Start (29.50s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-621685 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-621685 "sudo systemctl is-active --quiet service kubelet": exit status 1 (218.462816ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (0.74s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.74s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-621685
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-621685: (1.251195413s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (46.12s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-621685 --driver=kvm2  --container-runtime=containerd
E1006 01:44:47.727316   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-621685 --driver=kvm2  --container-runtime=containerd: (46.118827483s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (46.12s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-621685 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-621685 "sudo systemctl is-active --quiet service kubelet": exit status 1 (227.193927ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestPause/serial/Start (157.56s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-964356 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-964356 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m37.56293206s)
--- PASS: TestPause/serial/Start (157.56s)

TestStoppedBinaryUpgrade/Setup (3.16s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.16s)

TestStoppedBinaryUpgrade/Upgrade (123.98s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.1757492470.exe start -p stopped-upgrade-728251 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.1757492470.exe start -p stopped-upgrade-728251 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m12.275290675s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.1757492470.exe -p stopped-upgrade-728251 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.1757492470.exe -p stopped-upgrade-728251 stop: (1.501185406s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-728251 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-728251 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (50.207259277s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (123.98s)

TestNetworkPlugins/group/auto/Start (123.64s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-858422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-858422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m3.637491253s)
--- PASS: TestNetworkPlugins/group/auto/Start (123.64s)

TestPause/serial/SecondStartNoReconfiguration (21.64s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-964356 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-964356 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (21.627097731s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (21.64s)

TestNetworkPlugins/group/kindnet/Start (105.44s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-858422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-858422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m45.436403464s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (105.44s)

TestPause/serial/Pause (0.77s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-964356 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

TestPause/serial/VerifyStatus (0.29s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-964356 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-964356 --output=json --layout=cluster: exit status 2 (288.273291ms)

-- stdout --
	{"Name":"pause-964356","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-964356","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)

TestPause/serial/Unpause (0.93s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-964356 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.93s)

TestPause/serial/PauseAgain (0.91s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-964356 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.91s)

TestPause/serial/DeletePaused (1.13s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-964356 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-964356 --alsologtostderr -v=5: (1.126713389s)
--- PASS: TestPause/serial/DeletePaused (1.13s)

TestPause/serial/VerifyDeletedResources (3.34s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.339976137s)
--- PASS: TestPause/serial/VerifyDeletedResources (3.34s)

TestNetworkPlugins/group/calico/Start (157.48s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-858422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
E1006 01:49:15.521610   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-858422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (2m37.481968432s)
--- PASS: TestNetworkPlugins/group/calico/Start (157.48s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-728251
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-728251: (1.194059474s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

TestNetworkPlugins/group/custom-flannel/Start (123.74s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-858422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E1006 01:49:39.144294   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:49:47.727514   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-858422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (2m3.742117987s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (123.74s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-858422 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (11.44s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-858422 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rwq4q" [803a2761-0bf9-42f5-849a-5d18e416f949] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rwq4q" [803a2761-0bf9-42f5-849a-5d18e416f949] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.014365253s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.44s)

TestNetworkPlugins/group/auto/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-858422 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

TestNetworkPlugins/group/auto/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-858422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-858422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-s4dh8" [3bd65797-a5b9-4a4a-9fca-0c409b278113] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.02603646s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-858422 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-858422 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sdf5j" [b9e7a26c-4edc-4e7a-80b6-483ad4816e3e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sdf5j" [b9e7a26c-4edc-4e7a-80b6-483ad4816e3e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.022314884s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (91.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-858422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-858422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m31.080798574s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-858422 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-858422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-858422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (122.48s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-858422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-858422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (2m2.478291321s)
--- PASS: TestNetworkPlugins/group/flannel/Start (122.48s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-brnsr" [ab371e22-dfc4-4baa-842d-3e88e050cf80] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.034865498s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-858422 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-858422 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-858422 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h4487" [85eb3489-7a23-40f8-bdea-515ab8010830] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-h4487" [85eb3489-7a23-40f8-bdea-515ab8010830] Running
E1006 01:51:36.096350   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.033270079s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.43s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.45s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-858422 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f2lnl" [4d0f8e0e-8698-480f-848e-a341d450ad75] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-f2lnl" [4d0f8e0e-8698-480f-848e-a341d450ad75] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.012388813s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.45s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-858422 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-858422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-858422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-858422 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-858422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.34s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-858422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (126.36s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-858422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-858422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (2m6.35859073s)
--- PASS: TestNetworkPlugins/group/bridge/Start (126.36s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (159s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-316702 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-316702 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m38.997312178s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (159.00s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-858422 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-858422 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tsvlv" [2445dcef-5ad6-4ad0-8484-15028bb8e743] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tsvlv" [2445dcef-5ad6-4ad0-8484-15028bb8e743] Running
E1006 01:52:18.568717   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.015874046s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.49s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-858422 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-858422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-858422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (91.61s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-301817 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-301817 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (1m31.61456108s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (91.61s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6ffc8" [4d672308-ba87-4505-ba47-03f9ab4be346] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.027668899s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-858422 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.41s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-858422 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tdjts" [098eb15b-34fa-46c5-a596-34d6c5932680] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tdjts" [098eb15b-34fa-46c5-a596-34d6c5932680] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.010429374s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-858422 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-858422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-858422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (88.46s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-475769 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-475769 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (1m28.455871258s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.46s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-858422 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-858422 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vscln" [d8d1e6d1-dc41-4e10-a888-cdb06611f738] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vscln" [d8d1e6d1-dc41-4e10-a888-cdb06611f738] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.011996305s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (12.5s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-301817 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f987b2d1-430b-43e2-81ce-599a80ea52e4] Pending
helpers_test.go:344: "busybox" [f987b2d1-430b-43e2-81ce-599a80ea52e4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1006 01:54:15.522558   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f987b2d1-430b-43e2-81ce-599a80ea52e4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.024616873s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-301817 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.50s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-858422 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-858422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-858422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E1006 02:03:04.232378   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-301817 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-301817 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.354024232s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-301817 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (92.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-301817 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-301817 --alsologtostderr -v=3: (1m32.108986217s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-808329 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-808329 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (1m19.506126456s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-316702 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a1332e8b-2eb2-4f61-bbb8-583858331900] Pending
helpers_test.go:344: "busybox" [a1332e8b-2eb2-4f61-bbb8-583858331900] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1006 01:54:47.727570   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
helpers_test.go:344: "busybox" [a1332e8b-2eb2-4f61-bbb8-583858331900] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.035901945s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-316702 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-316702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-316702 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (92.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-316702 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-316702 --alsologtostderr -v=3: (1m32.280607041s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (12.75s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-475769 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5388acd4-5646-47f1-b4d5-c20966f1e1e8] Pending
helpers_test.go:344: "busybox" [5388acd4-5646-47f1-b4d5-c20966f1e1e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1006 01:55:11.595562   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
E1006 01:55:11.600886   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
E1006 01:55:11.611156   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
E1006 01:55:11.631432   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
E1006 01:55:11.671741   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
E1006 01:55:11.752328   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
E1006 01:55:11.912645   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
E1006 01:55:12.233043   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
E1006 01:55:12.873909   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
E1006 01:55:14.154525   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
helpers_test.go:344: "busybox" [5388acd4-5646-47f1-b4d5-c20966f1e1e8] Running
E1006 01:55:16.715407   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.10108916s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-475769 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.75s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.3s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-475769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-475769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.221384372s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-475769 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.30s)

TestStartStop/group/embed-certs/serial/Stop (91.77s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-475769 --alsologtostderr -v=3
E1006 01:55:21.835656   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
E1006 01:55:25.733156   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 01:55:25.738391   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 01:55:25.748628   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 01:55:25.768908   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 01:55:25.809249   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 01:55:25.889892   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 01:55:26.050849   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 01:55:26.371560   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 01:55:27.012169   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 01:55:28.292957   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 01:55:30.854103   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 01:55:32.076561   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
E1006 01:55:35.974254   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 01:55:46.214865   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 01:55:52.557491   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-475769 --alsologtostderr -v=3: (1m31.771256807s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.77s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-808329 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7756db56-21df-4066-8562-02730cccaf99] Pending
helpers_test.go:344: "busybox" [7756db56-21df-4066-8562-02730cccaf99] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7756db56-21df-4066-8562-02730cccaf99] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.034510798s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-808329 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.47s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-301817 -n no-preload-301817
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-301817 -n no-preload-301817: exit status 7 (76.713283ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-301817 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (328.83s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-301817 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-301817 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (5m28.534592328s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-301817 -n no-preload-301817
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (328.83s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-808329 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1006 01:56:06.695671   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-808329 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.072036341s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-808329 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-808329 --alsologtostderr -v=3
E1006 01:56:25.835024   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
E1006 01:56:25.840281   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
E1006 01:56:25.850529   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
E1006 01:56:25.870845   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
E1006 01:56:25.911186   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
E1006 01:56:25.991486   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
E1006 01:56:26.151629   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
E1006 01:56:26.472020   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-808329 --alsologtostderr -v=3: (1m32.046292219s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.05s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-316702 -n old-k8s-version-316702
E1006 01:56:27.112905   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-316702 -n old-k8s-version-316702: exit status 7 (82.350704ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-316702 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (451.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-316702 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E1006 01:56:28.393476   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
E1006 01:56:30.954101   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
E1006 01:56:31.430760   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 01:56:31.436068   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 01:56:31.446339   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 01:56:31.466632   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 01:56:31.506950   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 01:56:31.587247   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 01:56:31.747742   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 01:56:32.067930   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 01:56:32.709077   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 01:56:33.518521   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
E1006 01:56:33.989730   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 01:56:36.075161   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
E1006 01:56:36.096406   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
E1006 01:56:36.550801   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 01:56:41.670997   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 01:56:46.316099   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
E1006 01:56:47.656473   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 01:56:51.911875   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-316702 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m30.964838807s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-316702 -n old-k8s-version-316702
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (451.26s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-475769 -n embed-certs-475769
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-475769 -n embed-certs-475769: exit status 7 (104.689998ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-475769 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/embed-certs/serial/SecondStart (331.29s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-475769 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
E1006 01:57:06.797196   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
E1006 01:57:12.392208   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 01:57:12.620630   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
E1006 01:57:12.625922   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
E1006 01:57:12.636143   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
E1006 01:57:12.656409   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
E1006 01:57:12.696717   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
E1006 01:57:12.777120   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
E1006 01:57:12.937623   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
E1006 01:57:13.258610   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
E1006 01:57:13.898902   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
E1006 01:57:15.179311   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
E1006 01:57:17.740137   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
E1006 01:57:22.860710   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
E1006 01:57:33.101794   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-475769 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (5m30.890071402s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-475769 -n embed-certs-475769
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (331.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-808329 -n default-k8s-diff-port-808329
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-808329 -n default-k8s-diff-port-808329: exit status 7 (81.497479ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-808329 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (332.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-808329 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
E1006 01:57:47.757732   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
E1006 01:57:53.352440   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 01:57:53.582889   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
E1006 01:57:55.439196   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
E1006 01:58:04.232291   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 01:58:04.237594   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 01:58:04.247847   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 01:58:04.268165   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 01:58:04.308534   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 01:58:04.388874   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 01:58:04.549320   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 01:58:04.869866   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 01:58:05.510814   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 01:58:06.791827   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 01:58:09.352099   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 01:58:09.577535   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 01:58:14.472789   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 01:58:24.713658   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 01:58:34.543529   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
E1006 01:58:45.194270   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 01:59:07.296908   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 01:59:07.302232   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 01:59:07.312563   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 01:59:07.332904   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 01:59:07.373244   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 01:59:07.453604   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 01:59:07.613864   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 01:59:07.934313   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 01:59:08.574466   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 01:59:09.677956   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
E1006 01:59:09.855280   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 01:59:12.416088   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 01:59:15.273468   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 01:59:15.521746   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/ingress-addon-legacy-989525/client.crt: no such file or directory
E1006 01:59:17.536949   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 01:59:26.155067   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 01:59:27.777884   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 01:59:30.777651   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:59:47.726630   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/addons-565340/client.crt: no such file or directory
E1006 01:59:48.258252   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 01:59:56.464324   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
E1006 02:00:11.596447   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
E1006 02:00:25.733873   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 02:00:29.218495   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 02:00:39.280095   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/auto-858422/client.crt: no such file or directory
E1006 02:00:48.075323   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
E1006 02:00:53.417776   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/kindnet-858422/client.crt: no such file or directory
E1006 02:01:25.834505   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-808329 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (5m31.880736141s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-808329 -n default-k8s-diff-port-808329
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (332.41s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-72k5j" [ddc0b9e6-5142-49d8-bda1-4f879e44b147] Running
E1006 02:01:31.430125   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019807146s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-72k5j" [ddc0b9e6-5142-49d8-bda1-4f879e44b147] Running
E1006 02:01:36.096401   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/functional-362209/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010424278s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-301817 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-301817 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (2.78s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-301817 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-301817 -n no-preload-301817
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-301817 -n no-preload-301817: exit status 2 (269.439326ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-301817 -n no-preload-301817
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-301817 -n no-preload-301817: exit status 2 (271.867016ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-301817 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-301817 -n no-preload-301817
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-301817 -n no-preload-301817
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.78s)

TestStartStop/group/newest-cni/serial/FirstStart (86.21s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-082724 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
E1006 02:01:51.139428   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
E1006 02:01:53.519028   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/calico-858422/client.crt: no such file or directory
E1006 02:01:59.113819   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/custom-flannel-858422/client.crt: no such file or directory
E1006 02:02:12.620934   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-082724 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (1m26.206859423s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (86.21s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xz6nr" [816ca756-ab7c-49c0-a0c4-549ec260d5f8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xz6nr" [816ca756-ab7c-49c0-a0c4-549ec260d5f8] Running
E1006 02:02:40.305344   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/enable-default-cni-858422/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.025861079s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xz6nr" [816ca756-ab7c-49c0-a0c4-549ec260d5f8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018458375s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-475769 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-475769 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/embed-certs/serial/Pause (3.48s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-475769 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-475769 --alsologtostderr -v=1: (1.011946404s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-475769 -n embed-certs-475769
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-475769 -n embed-certs-475769: exit status 2 (369.639888ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-475769 -n embed-certs-475769
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-475769 -n embed-certs-475769: exit status 2 (309.329262ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-475769 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-475769 -n embed-certs-475769
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-475769 -n embed-certs-475769
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.48s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.64s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-082724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-082724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.637463633s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.64s)

TestStartStop/group/newest-cni/serial/Stop (7.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-082724 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-082724 --alsologtostderr -v=3: (7.151068078s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.15s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (19.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j955j" [7c0f4ff2-dd06-4421-88d6-d26935b8c7d7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j955j" [7c0f4ff2-dd06-4421-88d6-d26935b8c7d7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.024770977s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (19.03s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-082724 -n newest-cni-082724
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-082724 -n newest-cni-082724: exit status 7 (77.19872ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-082724 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (49.7s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-082724 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-082724 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.2: (49.402943823s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-082724 -n newest-cni-082724
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (49.70s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j955j" [7c0f4ff2-dd06-4421-88d6-d26935b8c7d7] Running
E1006 02:03:31.916399   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/flannel-858422/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01375013s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-808329 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-808329 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-808329 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-808329 -n default-k8s-diff-port-808329
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-808329 -n default-k8s-diff-port-808329: exit status 2 (260.089788ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-808329 -n default-k8s-diff-port-808329
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-808329 -n default-k8s-diff-port-808329: exit status 2 (247.571509ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-808329 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-808329 -n default-k8s-diff-port-808329
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-808329 -n default-k8s-diff-port-808329
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.59s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-pbwfw" [ff568f9b-f9b3-4e38-9635-f951cb03d977] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01939547s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-pbwfw" [ff568f9b-f9b3-4e38-9635-f951cb03d977] Running
E1006 02:04:07.296923   73840 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17314-66550/.minikube/profiles/bridge-858422/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011691319s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-316702 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-082724 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (2.71s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-082724 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-082724 -n newest-cni-082724
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-082724 -n newest-cni-082724: exit status 2 (275.210987ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-082724 -n newest-cni-082724
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-082724 -n newest-cni-082724: exit status 2 (289.528894ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-082724 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-082724 -n newest-cni-082724
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-082724 -n newest-cni-082724
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.71s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-316702 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/old-k8s-version/serial/Pause (3.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-316702 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-316702 -n old-k8s-version-316702
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-316702 -n old-k8s-version-316702: exit status 2 (282.439047ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-316702 -n old-k8s-version-316702
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-316702 -n old-k8s-version-316702: exit status 2 (272.313726ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-316702 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-316702 -n old-k8s-version-316702
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-316702 -n old-k8s-version-316702
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.37s)

Test skip (36/306)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.2/cached-images 0
13 TestDownloadOnly/v1.28.2/binaries 0
14 TestDownloadOnly/v1.28.2/kubectl 0
18 TestDownloadOnlyKic 0
32 TestAddons/parallel/Olm 0
44 TestDockerFlags 0
47 TestDockerEnvContainerd 0
49 TestHyperKitDriverInstallOrUpdate 0
50 TestHyperkitDriverSkipUpgrade 0
101 TestFunctional/parallel/DockerEnv 0
102 TestFunctional/parallel/PodmanEnv 0
118 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
150 TestGvisorAddon 0
151 TestImageBuild 0
184 TestKicCustomNetwork 0
185 TestKicExistingNetwork 0
186 TestKicCustomSubnet 0
187 TestKicStaticIP 0
218 TestChangeNoneUser 0
221 TestScheduledStopWindows 0
223 TestSkaffold 0
225 TestInsufficientStorage 0
229 TestMissingContainerUpgrade 0
234 TestNetworkPlugins/group/kubenet 3.34
243 TestNetworkPlugins/group/cilium 3.63
251 TestStartStop/group/disable-driver-mounts 0.16

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.34s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-858422 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-858422

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-858422

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-858422

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-858422

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-858422

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-858422

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-858422

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-858422

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-858422

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-858422

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: /etc/hosts:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: /etc/resolv.conf:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-858422

>>> host: crictl pods:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: crictl containers:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> k8s: describe netcat deployment:
error: context "kubenet-858422" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-858422" does not exist

>>> k8s: netcat logs:
error: context "kubenet-858422" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-858422" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-858422" does not exist

>>> k8s: coredns logs:
error: context "kubenet-858422" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-858422" does not exist

>>> k8s: api server logs:
error: context "kubenet-858422" does not exist

>>> host: /etc/cni:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: ip a s:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: ip r s:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: iptables-save:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: iptables table nat:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-858422" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-858422" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-858422" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: kubelet daemon config:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> k8s: kubelet logs:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-858422

>>> host: docker daemon status:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: docker daemon config:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: docker system info:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: cri-docker daemon status:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: cri-docker daemon config:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-858422"

                                                
                                                
----------------------- debugLogs end: kubenet-858422 [took: 3.186988008s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-858422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-858422
--- SKIP: TestNetworkPlugins/group/kubenet (3.34s)

TestNetworkPlugins/group/cilium (3.63s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-858422 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-858422

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-858422

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-858422

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-858422

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-858422

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-858422

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-858422

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-858422

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-858422

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-858422

>>> host: /etc/nsswitch.conf:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: /etc/hosts:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: /etc/resolv.conf:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-858422

>>> host: crictl pods:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: crictl containers:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> k8s: describe netcat deployment:
error: context "cilium-858422" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-858422" does not exist

>>> k8s: netcat logs:
error: context "cilium-858422" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-858422" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-858422" does not exist

>>> k8s: coredns logs:
error: context "cilium-858422" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-858422" does not exist

>>> k8s: api server logs:
error: context "cilium-858422" does not exist

>>> host: /etc/cni:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: ip a s:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: ip r s:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: iptables-save:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: iptables table nat:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-858422

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-858422

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-858422" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-858422" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-858422

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-858422

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-858422" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-858422" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-858422" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-858422" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-858422" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: kubelet daemon config:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> k8s: kubelet logs:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-858422

>>> host: docker daemon status:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: docker daemon config:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: docker system info:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: cri-docker daemon status:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: cri-docker daemon config:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: cri-dockerd version:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: containerd daemon status:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: containerd daemon config:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: containerd config dump:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: crio daemon status:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: crio daemon config:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: /etc/crio:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

>>> host: crio config:
* Profile "cilium-858422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-858422"

----------------------- debugLogs end: cilium-858422 [took: 3.481698794s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-858422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-858422
--- SKIP: TestNetworkPlugins/group/cilium (3.63s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-957818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-957818
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)