=== RUN TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 11.257644ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-2hmfs" [7e30047c-df90-44cb-b9a2-98b6574dd90f] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00411993s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-z78qc" [afce7356-364e-4145-824f-b686975f47b9] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005051033s
addons_test.go:340: (dbg) Run: kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (33.296979269s)
-- stdout --
pod "registry-test" deleted
-- /stdout --
** stderr **
Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/registry-test, falling back to streaming logs:
pod default/registry-test terminated (Error)
** /stderr **
addons_test.go:347: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:351: expected curl response to be "HTTP/1.1 200", but got *pod "registry-test" deleted*
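The check at addons_test.go:351 treats the probe as successful only if the `wget --spider -S` output carries an "HTTP/1.1 200" status line; here it got the pod-deletion message instead. A minimal Go sketch of that kind of check, assuming a plain substring match (the test's real helper is not shown in this log and may compare differently):

```go
package main

import (
	"fmt"
	"strings"
)

// responseOK reports whether probe output contains the expected
// "HTTP/1.1 200" status line, as the registry test requires.
func responseOK(out string) bool {
	return strings.Contains(out, "HTTP/1.1 200")
}

func main() {
	fmt.Println(responseOK("  HTTP/1.1 200 OK\n  Content-Type: text/html")) // healthy registry
	fmt.Println(responseOK(`pod "registry-test" deleted`))                  // the failure seen above
}
```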
addons_test.go:359: (dbg) Run: out/minikube-linux-amd64 -p minikube ip
2024/03/27 19:48:47 [DEBUG] GET http://10.128.15.240:5000
2024/03/27 19:48:47 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:48:47 [DEBUG] GET http://10.128.15.240:5000: retrying in 1s (4 left)
2024/03/27 19:48:48 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:48:48 [DEBUG] GET http://10.128.15.240:5000: retrying in 2s (3 left)
2024/03/27 19:48:50 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:48:50 [DEBUG] GET http://10.128.15.240:5000: retrying in 4s (2 left)
2024/03/27 19:48:54 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:48:54 [DEBUG] GET http://10.128.15.240:5000: retrying in 8s (1 left)
2024/03/27 19:49:02 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:51:12 [DEBUG] GET http://10.128.15.240:5000
2024/03/27 19:51:12 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:51:12 [DEBUG] GET http://10.128.15.240:5000: retrying in 1s (4 left)
2024/03/27 19:51:13 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:51:13 [DEBUG] GET http://10.128.15.240:5000: retrying in 2s (3 left)
2024/03/27 19:51:15 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:51:15 [DEBUG] GET http://10.128.15.240:5000: retrying in 4s (2 left)
2024/03/27 19:51:19 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
2024/03/27 19:51:19 [DEBUG] GET http://10.128.15.240:5000: retrying in 8s (1 left)
2024/03/27 19:51:27 [ERR] GET http://10.128.15.240:5000 request failed: Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
addons_test.go:385: failed to check external access to http://10.128.15.240:5000: GET http://10.128.15.240:5000 giving up after 5 attempt(s): Get "http://10.128.15.240:5000": dial tcp 10.128.15.240:5000: connect: connection refused
addons_test.go:388: (dbg) Run: out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p minikube logs -n 25: (1.035851996s)
helpers_test.go:252: TestAddons/parallel/Registry logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|----------|---------|----------------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|----------|---------|----------------|---------------------|---------------------|
| start | -o=json --download-only | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC | 27 Mar 24 19:45 UTC |
| delete | -p minikube | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC | 27 Mar 24 19:45 UTC |
| start | -o=json --download-only | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:45 UTC | |
| | -p minikube --force | | | | | |
| | --alsologtostderr | | | | | |
| | --kubernetes-version=v1.29.3 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
| delete | -p minikube | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
| start | -o=json --download-only -p | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | |
| | minikube --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.30.0-beta.0 | | | | | |
| | --container-runtime=docker | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | --all | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
| delete | -p minikube | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
| delete | -p minikube | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
| delete | -p minikube | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
| delete | -p minikube | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
| start | --download-only -p | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | |
| | minikube --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:43581 | | | | | |
| | --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
| start | -p minikube --alsologtostderr | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
| | -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| delete | -p minikube | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:46 UTC |
| addons | enable dashboard -p minikube | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | |
| addons | disable dashboard -p minikube | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | |
| start | -p minikube --wait=true | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:46 UTC | 27 Mar 24 19:48 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --driver=none | | | | | |
| | --bootstrapper=kubeadm | | | | | |
| | --addons=helm-tiller | | | | | |
| ip | minikube ip | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:48 UTC | 27 Mar 24 19:48 UTC |
| addons | minikube addons disable | minikube | jenkins | v1.33.0-beta.0 | 27 Mar 24 19:51 UTC | 27 Mar 24 19:51 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
|---------|--------------------------------------|----------|---------|----------------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/03/27 19:46:54
Running on machine: ubuntu-20-agent-15
Binary: Built with gc go1.22.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0327 19:46:54.297113 785442 out.go:291] Setting OutFile to fd 1 ...
I0327 19:46:54.297417 785442 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 19:46:54.297428 785442 out.go:304] Setting ErrFile to fd 2...
I0327 19:46:54.297432 785442 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 19:46:54.297666 785442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17735-771440/.minikube/bin
I0327 19:46:54.299145 785442 out.go:298] Setting JSON to false
I0327 19:46:54.300396 785442 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12552,"bootTime":1711556262,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0327 19:46:54.300480 785442 start.go:139] virtualization: kvm guest
I0327 19:46:54.302653 785442 out.go:177] * minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
I0327 19:46:54.304797 785442 notify.go:220] Checking for updates...
I0327 19:46:54.304811 785442 out.go:177] - MINIKUBE_LOCATION=17735
W0327 19:46:54.304730 785442 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/17735-771440/.minikube/cache/preloaded-tarball: no such file or directory
I0327 19:46:54.306386 785442 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0327 19:46:54.308073 785442 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17735-771440/kubeconfig
I0327 19:46:54.309694 785442 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17735-771440/.minikube
I0327 19:46:54.311064 785442 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0327 19:46:54.312456 785442 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0327 19:46:54.313975 785442 driver.go:392] Setting default libvirt URI to qemu:///system
I0327 19:46:54.326966 785442 out.go:177] * Using the none driver based on user configuration
I0327 19:46:54.328502 785442 start.go:297] selected driver: none
I0327 19:46:54.328523 785442 start.go:901] validating driver "none" against <nil>
I0327 19:46:54.328542 785442 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0327 19:46:54.328575 785442 start.go:1733] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
W0327 19:46:54.328897 785442 out.go:239] ! The 'none' driver does not respect the --memory flag
I0327 19:46:54.329397 785442 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0327 19:46:54.329627 785442 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0327 19:46:54.329697 785442 cni.go:84] Creating CNI manager for ""
I0327 19:46:54.329711 785442 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0327 19:46:54.329727 785442 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0327 19:46:54.329771 785442 start.go:340] cluster config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0327 19:46:54.331429 785442 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
I0327 19:46:54.333098 785442 profile.go:143] Saving config to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/config.json ...
I0327 19:46:54.333138 785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/config.json: {Name:mkc12f016488e18252a34aa57adffbeb5566b2c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0327 19:46:54.333273 785442 start.go:360] acquireMachinesLock for minikube: {Name:mk84f2ad31410d090434f21fe1137802c30e2ddd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0327 19:46:54.333306 785442 start.go:364] duration metric: took 18.881µs to acquireMachinesLock for "minikube"
I0327 19:46:54.333319 785442 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0327 19:46:54.333381 785442 start.go:125] createHost starting for "" (driver="none")
I0327 19:46:54.335042 785442 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
I0327 19:46:54.336368 785442 exec_runner.go:51] Run: systemctl --version
I0327 19:46:54.339074 785442 start.go:159] libmachine.API.Create for "minikube" (driver="none")
I0327 19:46:54.339114 785442 client.go:168] LocalClient.Create starting
I0327 19:46:54.339169 785442 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17735-771440/.minikube/certs/ca.pem
I0327 19:46:54.339206 785442 main.go:141] libmachine: Decoding PEM data...
I0327 19:46:54.339226 785442 main.go:141] libmachine: Parsing certificate...
I0327 19:46:54.339278 785442 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17735-771440/.minikube/certs/cert.pem
I0327 19:46:54.339301 785442 main.go:141] libmachine: Decoding PEM data...
I0327 19:46:54.339313 785442 main.go:141] libmachine: Parsing certificate...
I0327 19:46:54.339627 785442 client.go:171] duration metric: took 504.972µs to LocalClient.Create
I0327 19:46:54.339653 785442 start.go:167] duration metric: took 583.18µs to libmachine.API.Create "minikube"
I0327 19:46:54.339668 785442 start.go:293] postStartSetup for "minikube" (driver="none")
I0327 19:46:54.339709 785442 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0327 19:46:54.339751 785442 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0327 19:46:54.348006 785442 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0327 19:46:54.348035 785442 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0327 19:46:54.348045 785442 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0327 19:46:54.350068 785442 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I0327 19:46:54.351299 785442 filesync.go:126] Scanning /home/jenkins/minikube-integration/17735-771440/.minikube/addons for local assets ...
I0327 19:46:54.351358 785442 filesync.go:126] Scanning /home/jenkins/minikube-integration/17735-771440/.minikube/files for local assets ...
I0327 19:46:54.351378 785442 start.go:296] duration metric: took 11.699938ms for postStartSetup
I0327 19:46:54.351970 785442 profile.go:143] Saving config to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/config.json ...
I0327 19:46:54.352102 785442 start.go:128] duration metric: took 18.709592ms to createHost
I0327 19:46:54.352117 785442 start.go:83] releasing machines lock for "minikube", held for 18.803417ms
I0327 19:46:54.352457 785442 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0327 19:46:54.352536 785442 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0327 19:46:54.354478 785442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0327 19:46:54.354517 785442 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0327 19:46:54.365437 785442 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0327 19:46:54.365470 785442 start.go:494] detecting cgroup driver to use...
I0327 19:46:54.365501 785442 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0327 19:46:54.365662 785442 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0327 19:46:54.387549 785442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0327 19:46:54.398146 785442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0327 19:46:54.408684 785442 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0327 19:46:54.408777 785442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0327 19:46:54.418207 785442 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0327 19:46:54.426947 785442 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0327 19:46:54.438080 785442 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0327 19:46:54.449398 785442 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0327 19:46:54.458364 785442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0327 19:46:54.468297 785442 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0327 19:46:54.478156 785442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0327 19:46:54.514427 785442 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0327 19:46:54.524589 785442 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0327 19:46:54.533343 785442 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0327 19:46:54.734315 785442 exec_runner.go:51] Run: sudo systemctl restart containerd
I0327 19:46:54.796360 785442 start.go:494] detecting cgroup driver to use...
I0327 19:46:54.796421 785442 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0327 19:46:54.796556 785442 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0327 19:46:54.817115 785442 exec_runner.go:51] Run: which cri-dockerd
I0327 19:46:54.818139 785442 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0327 19:46:54.827471 785442 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I0327 19:46:54.827499 785442 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0327 19:46:54.827551 785442 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0327 19:46:54.835777 785442 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0327 19:46:54.835957 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube469473553 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0327 19:46:54.844359 785442 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0327 19:46:55.048434 785442 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0327 19:46:55.259024 785442 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0327 19:46:55.259213 785442 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0327 19:46:55.259230 785442 exec_runner.go:203] rm: /etc/docker/daemon.json
I0327 19:46:55.259278 785442 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
I0327 19:46:55.271662 785442 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
I0327 19:46:55.271908 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1453567436 /etc/docker/daemon.json
I0327 19:46:55.282342 785442 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0327 19:46:55.511690 785442 exec_runner.go:51] Run: sudo systemctl restart docker
I0327 19:46:55.785065 785442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0327 19:46:55.796163 785442 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
I0327 19:46:55.811325 785442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0327 19:46:55.821978 785442 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0327 19:46:56.022280 785442 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0327 19:46:56.220580 785442 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0327 19:46:56.429336 785442 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0327 19:46:56.445339 785442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
I0327 19:46:56.456952 785442 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0327 19:46:56.680854 785442 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
I0327 19:46:56.751285 785442 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0327 19:46:56.751378 785442 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0327 19:46:56.752850 785442 start.go:562] Will wait 60s for crictl version
I0327 19:46:56.752928 785442 exec_runner.go:51] Run: which crictl
I0327 19:46:56.753970 785442 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
I0327 19:46:56.798136 785442 start.go:578] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 26.0.0
RuntimeApiVersion: v1
I0327 19:46:56.798205 785442 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0327 19:46:56.819045 785442 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0327 19:46:56.842572 785442 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
I0327 19:46:56.842651 785442 exec_runner.go:51] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0327 19:46:56.845432 785442 out.go:177] - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0327 19:46:56.846891 785442 kubeadm.go:877] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDom
ain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0327 19:46:56.847015 785442 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0327 19:46:56.847029 785442 kubeadm.go:928] updating node { 10.128.15.240 8443 v1.29.3 docker true true} ...
I0327 19:46:56.847139 785442 kubeadm.go:940] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-15 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.128.15.240 --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
config:
{KubernetesVersion:v1.29.3 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
I0327 19:46:56.847192 785442 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
I0327 19:46:56.894393 785442 cni.go:84] Creating CNI manager for ""
I0327 19:46:56.894427 785442 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0327 19:46:56.894438 785442 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0327 19:46:56.894475 785442 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.128.15.240 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-15 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.128.15.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.128.15.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0327 19:46:56.894642 785442 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.128.15.240
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ubuntu-20-agent-15"
kubeletExtraArgs:
node-ip: 10.128.15.240
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "10.128.15.240"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.29.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0327 19:46:56.894704 785442 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
I0327 19:46:56.902934 785442 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
Initiating transfer...
I0327 19:46:56.902990 785442 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
I0327 19:46:56.911456 785442 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
I0327 19:46:56.911470 785442 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
I0327 19:46:56.911482 785442 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
I0327 19:46:56.911501 785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
I0327 19:46:56.911535 785442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0327 19:46:56.911546 785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
I0327 19:46:56.923953 785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
I0327 19:46:56.954436 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3780611543 /var/lib/minikube/binaries/v1.29.3/kubectl
I0327 19:46:56.970895 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube201494140 /var/lib/minikube/binaries/v1.29.3/kubeadm
I0327 19:46:57.051291 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4245865079 /var/lib/minikube/binaries/v1.29.3/kubelet
I0327 19:46:57.139749 785442 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0327 19:46:57.148685 785442 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0327 19:46:57.148707 785442 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0327 19:46:57.148742 785442 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0327 19:46:57.156979 785442 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
I0327 19:46:57.157155 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3168944734 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0327 19:46:57.165811 785442 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
I0327 19:46:57.165865 785442 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
I0327 19:46:57.165907 785442 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
I0327 19:46:57.173594 785442 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0327 19:46:57.173741 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2299999754 /lib/systemd/system/kubelet.service
I0327 19:46:57.182601 785442 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
I0327 19:46:57.182727 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1320304341 /var/tmp/minikube/kubeadm.yaml.new
I0327 19:46:57.191495 785442 exec_runner.go:51] Run: grep 10.128.15.240 control-plane.minikube.internal$ /etc/hosts
I0327 19:46:57.192829 785442 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0327 19:46:57.404187 785442 exec_runner.go:51] Run: sudo systemctl start kubelet
I0327 19:46:57.417961 785442 certs.go:68] Setting up /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube for IP: 10.128.15.240
I0327 19:46:57.417986 785442 certs.go:194] generating shared ca certs ...
I0327 19:46:57.418005 785442 certs.go:226] acquiring lock for ca certs: {Name:mk49622af302dd5fe131a9430f1e35c7c09bed3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0327 19:46:57.418175 785442 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17735-771440/.minikube/ca.key
I0327 19:46:57.418229 785442 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17735-771440/.minikube/proxy-client-ca.key
I0327 19:46:57.418242 785442 certs.go:256] generating profile certs ...
I0327 19:46:57.418317 785442 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.key
I0327 19:46:57.418338 785442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.crt with IP's: []
I0327 19:46:57.547168 785442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.crt ...
I0327 19:46:57.547204 785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.crt: {Name:mk87f5f426e4a0e3131a1f1fd9ae6dbcf7a19426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0327 19:46:57.547380 785442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.key ...
I0327 19:46:57.547395 785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/client.key: {Name:mk65048bbdd8fd3ae6de6c6f48065f5c0dd6a82f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0327 19:46:57.547481 785442 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.key.271ff23d
I0327 19:46:57.547498 785442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.crt.271ff23d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.128.15.240]
I0327 19:46:57.718278 785442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.crt.271ff23d ...
I0327 19:46:57.718313 785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.crt.271ff23d: {Name:mk159a4f77c97c05a459f1d9737dd9a3dd096860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0327 19:46:57.718481 785442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.key.271ff23d ...
I0327 19:46:57.718504 785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.key.271ff23d: {Name:mk9aa3a1b68d7ecfb7f221c03b8c794d334e3058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0327 19:46:57.718583 785442 certs.go:381] copying /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.crt.271ff23d -> /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.crt
I0327 19:46:57.718704 785442 certs.go:385] copying /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.key.271ff23d -> /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.key
I0327 19:46:57.718787 785442 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.key
I0327 19:46:57.718811 785442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0327 19:46:57.897576 785442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.crt ...
I0327 19:46:57.897622 785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.crt: {Name:mka1edbad7a6d97a670e10222a0268c2708c7c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0327 19:46:57.897799 785442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.key ...
I0327 19:46:57.897821 785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.key: {Name:mk69a9bfba947e9c4681f82332e9a482b7546864 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0327 19:46:57.898050 785442 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-771440/.minikube/certs/ca-key.pem (1679 bytes)
I0327 19:46:57.898107 785442 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-771440/.minikube/certs/ca.pem (1078 bytes)
I0327 19:46:57.898152 785442 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-771440/.minikube/certs/cert.pem (1123 bytes)
I0327 19:46:57.898177 785442 certs.go:484] found cert: /home/jenkins/minikube-integration/17735-771440/.minikube/certs/key.pem (1679 bytes)
I0327 19:46:57.898893 785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0327 19:46:57.899047 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3511566354 /var/lib/minikube/certs/ca.crt
I0327 19:46:57.908180 785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0327 19:46:57.908302 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube423390180 /var/lib/minikube/certs/ca.key
I0327 19:46:57.916481 785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0327 19:46:57.916635 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3343948971 /var/lib/minikube/certs/proxy-client-ca.crt
I0327 19:46:57.925132 785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0327 19:46:57.925250 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube831950845 /var/lib/minikube/certs/proxy-client-ca.key
I0327 19:46:57.934151 785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0327 19:46:57.934275 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3350324935 /var/lib/minikube/certs/apiserver.crt
I0327 19:46:57.941918 785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0327 19:46:57.942037 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3478674603 /var/lib/minikube/certs/apiserver.key
I0327 19:46:57.949469 785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0327 19:46:57.949608 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube113090513 /var/lib/minikube/certs/proxy-client.crt
I0327 19:46:57.956926 785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0327 19:46:57.957071 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3982661572 /var/lib/minikube/certs/proxy-client.key
I0327 19:46:57.965023 785442 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0327 19:46:57.965044 785442 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
I0327 19:46:57.965080 785442 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
I0327 19:46:57.972890 785442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17735-771440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0327 19:46:57.973034 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1935575466 /usr/share/ca-certificates/minikubeCA.pem
I0327 19:46:57.980327 785442 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (752 bytes)
I0327 19:46:57.980438 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1398604646 /var/lib/minikube/kubeconfig
I0327 19:46:57.988456 785442 exec_runner.go:51] Run: openssl version
I0327 19:46:57.991178 785442 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0327 19:46:57.999642 785442 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0327 19:46:58.000836 785442 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Mar 27 19:46 /usr/share/ca-certificates/minikubeCA.pem
I0327 19:46:58.000875 785442 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0327 19:46:58.003687 785442 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0327 19:46:58.012131 785442 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0327 19:46:58.013189 785442 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0327 19:46:58.013230 785442 kubeadm.go:391] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.43-beta.0@sha256:185c97a62a2e62a78b853e29e445f05ffbcf36149614c192af3643aa3888c4e8 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0327 19:46:58.013357 785442 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0327 19:46:58.028166 785442 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0327 19:46:58.037323 785442 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0327 19:46:58.045783 785442 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
I0327 19:46:58.065867 785442 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0327 19:46:58.074622 785442 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0327 19:46:58.074647 785442 kubeadm.go:156] found existing configuration files:
I0327 19:46:58.074690 785442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0327 19:46:58.083008 785442 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0327 19:46:58.083073 785442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
I0327 19:46:58.093599 785442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0327 19:46:58.101612 785442 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0327 19:46:58.101673 785442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0327 19:46:58.109156 785442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0327 19:46:58.117274 785442 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0327 19:46:58.117320 785442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0327 19:46:58.125502 785442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0327 19:46:58.133647 785442 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0327 19:46:58.133718 785442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0327 19:46:58.143565 785442 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0327 19:46:58.187062 785442 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
I0327 19:46:58.187123 785442 kubeadm.go:309] [preflight] Running pre-flight checks
I0327 19:46:58.317646 785442 kubeadm.go:309] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0327 19:46:58.317702 785442 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
I0327 19:46:58.317718 785442 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0327 19:46:58.317724 785442 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0327 19:46:58.628749 785442 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0327 19:46:58.631820 785442 out.go:204] - Generating certificates and keys ...
I0327 19:46:58.631875 785442 kubeadm.go:309] [certs] Using existing ca certificate authority
I0327 19:46:58.631892 785442 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
I0327 19:46:58.830610 785442 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
I0327 19:46:59.063309 785442 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
I0327 19:46:59.228988 785442 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
I0327 19:46:59.336539 785442 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
I0327 19:46:59.466481 785442 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
I0327 19:46:59.466513 785442 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-15] and IPs [10.128.15.240 127.0.0.1 ::1]
I0327 19:46:59.737638 785442 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
I0327 19:46:59.737764 785442 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-15] and IPs [10.128.15.240 127.0.0.1 ::1]
I0327 19:46:59.957514 785442 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
I0327 19:47:00.082020 785442 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
I0327 19:47:00.503282 785442 kubeadm.go:309] [certs] Generating "sa" key and public key
I0327 19:47:00.503426 785442 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0327 19:47:00.634626 785442 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
I0327 19:47:01.062092 785442 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0327 19:47:01.146869 785442 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0327 19:47:01.258176 785442 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0327 19:47:01.349045 785442 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0327 19:47:01.349522 785442 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0327 19:47:01.352618 785442 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0327 19:47:01.355243 785442 out.go:204] - Booting up control plane ...
I0327 19:47:01.355276 785442 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0327 19:47:01.355301 785442 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0327 19:47:01.355315 785442 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0327 19:47:01.371856 785442 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0327 19:47:01.372696 785442 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0327 19:47:01.372720 785442 kubeadm.go:309] [kubelet-start] Starting the kubelet
I0327 19:47:01.585355 785442 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0327 19:47:07.087662 785442 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.502273 seconds
I0327 19:47:07.102275 785442 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0327 19:47:07.114131 785442 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0327 19:47:07.635898 785442 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
I0327 19:47:07.635927 785442 kubeadm.go:309] [mark-control-plane] Marking the node ubuntu-20-agent-15 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0327 19:47:08.145338 785442 kubeadm.go:309] [bootstrap-token] Using token: bv13wn.i50u7bhta9ujrc85
I0327 19:47:08.147207 785442 out.go:204] - Configuring RBAC rules ...
I0327 19:47:08.147250 785442 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0327 19:47:08.152619 785442 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0327 19:47:08.159958 785442 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0327 19:47:08.162979 785442 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
I0327 19:47:08.166099 785442 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0327 19:47:08.169479 785442 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0327 19:47:08.180439 785442 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0327 19:47:08.553958 785442 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
I0327 19:47:08.580529 785442 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
I0327 19:47:08.581520 785442 kubeadm.go:309]
I0327 19:47:08.581538 785442 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
I0327 19:47:08.581542 785442 kubeadm.go:309]
I0327 19:47:08.581546 785442 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
I0327 19:47:08.581550 785442 kubeadm.go:309]
I0327 19:47:08.581553 785442 kubeadm.go:309] mkdir -p $HOME/.kube
I0327 19:47:08.581557 785442 kubeadm.go:309] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0327 19:47:08.581580 785442 kubeadm.go:309] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0327 19:47:08.581584 785442 kubeadm.go:309]
I0327 19:47:08.581588 785442 kubeadm.go:309] Alternatively, if you are the root user, you can run:
I0327 19:47:08.581592 785442 kubeadm.go:309]
I0327 19:47:08.581597 785442 kubeadm.go:309] export KUBECONFIG=/etc/kubernetes/admin.conf
I0327 19:47:08.581600 785442 kubeadm.go:309]
I0327 19:47:08.581605 785442 kubeadm.go:309] You should now deploy a pod network to the cluster.
I0327 19:47:08.581609 785442 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0327 19:47:08.581613 785442 kubeadm.go:309] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0327 19:47:08.581616 785442 kubeadm.go:309]
I0327 19:47:08.581618 785442 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
I0327 19:47:08.581621 785442 kubeadm.go:309] and service account keys on each node and then running the following as root:
I0327 19:47:08.581624 785442 kubeadm.go:309]
I0327 19:47:08.581627 785442 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bv13wn.i50u7bhta9ujrc85 \
I0327 19:47:08.581630 785442 kubeadm.go:309] --discovery-token-ca-cert-hash sha256:476a7cd2dcbebb6a8f56145e16668b3c5b6b5cfe98b74adc4ab35b9910ca8ec9 \
I0327 19:47:08.581632 785442 kubeadm.go:309] --control-plane
I0327 19:47:08.581635 785442 kubeadm.go:309]
I0327 19:47:08.581638 785442 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
I0327 19:47:08.581640 785442 kubeadm.go:309]
I0327 19:47:08.581643 785442 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bv13wn.i50u7bhta9ujrc85 \
I0327 19:47:08.581652 785442 kubeadm.go:309] --discovery-token-ca-cert-hash sha256:476a7cd2dcbebb6a8f56145e16668b3c5b6b5cfe98b74adc4ab35b9910ca8ec9
I0327 19:47:08.585272 785442 cni.go:84] Creating CNI manager for ""
I0327 19:47:08.585299 785442 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0327 19:47:08.587572 785442 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0327 19:47:08.589088 785442 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
I0327 19:47:08.599776 785442 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0327 19:47:08.599938 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2864429203 /etc/cni/net.d/1-k8s.conflist
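The two lines above write minikube's bridge CNI config (457 bytes, staged in a temp file, then copied to /etc/cni/net.d/1-k8s.conflist). The actual file contents are not captured in this log; a typical bridge conflist of this kind looks roughly like the following sketch (plugin layout is standard CNI, but the subnet and field values here are illustrative assumptions, not what this run wrote):

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```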
I0327 19:47:08.610682 785442 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0327 19:47:08.610777 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:08.610789 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-15 minikube.k8s.io/updated_at=2024_03_27T19_47_08_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=fd5228225874e763d59e7e8bf88a02e145755a81 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0327 19:47:08.620564 785442 ops.go:34] apiserver oom_adj: -16
I0327 19:47:08.710270 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:09.210914 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:09.710359 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:10.210424 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:10.710890 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:11.211246 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:11.711364 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:12.211117 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:12.711204 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:13.210416 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:13.710747 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:14.211045 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:14.710669 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:15.211156 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:15.710411 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:16.210652 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:16.711035 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:17.211395 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:17.710626 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:18.210496 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:18.710873 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:19.211105 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:19.711191 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:20.210751 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:20.711118 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:21.211041 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:21.711025 785442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0327 19:47:21.788484 785442 kubeadm.go:1107] duration metric: took 13.177793351s to wait for elevateKubeSystemPrivileges
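The run of repeated `kubectl get sa default` lines above is a readiness poll: minikube retries roughly every 500ms until the `default` ServiceAccount exists, which signals the controller-manager is working, then reports the total wait (13.18s here). A minimal sketch of that pattern, with a hypothetical `probe` standing in for the kubectl call (the real loop also enforces a timeout):

```shell
# Poll-until-ready: retry a probe every 0.5s until it succeeds.
flag=$(mktemp)
probe() { [ -s "$flag" ]; }          # stand-in for: kubectl get sa default
( sleep 1; echo up > "$flag" ) &    # simulate the ServiceAccount appearing
until probe; do sleep 0.5; done
ready_msg="default ServiceAccount ready"
echo "$ready_msg"
rm -f "$flag"
```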
W0327 19:47:21.788531 785442 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
I0327 19:47:21.788574 785442 kubeadm.go:393] duration metric: took 23.775311524s to StartCluster
I0327 19:47:21.788601 785442 settings.go:142] acquiring lock: {Name:mk6aaa0aa244fc49fbd9078e2807c923dc87e9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0327 19:47:21.788676 785442 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/17735-771440/kubeconfig
I0327 19:47:21.789457 785442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17735-771440/kubeconfig: {Name:mkcbe4a4107c2ed93be9cf8bf198b7dda208e9d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0327 19:47:21.789694 785442 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0327 19:47:21.789780 785442 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
I0327 19:47:21.789920 785442 addons.go:69] Setting yakd=true in profile "minikube"
I0327 19:47:21.789931 785442 addons.go:69] Setting helm-tiller=true in profile "minikube"
I0327 19:47:21.789947 785442 addons.go:69] Setting registry=true in profile "minikube"
I0327 19:47:21.789968 785442 addons.go:234] Setting addon yakd=true in "minikube"
I0327 19:47:21.789980 785442 addons.go:69] Setting cloud-spanner=true in profile "minikube"
I0327 19:47:21.790000 785442 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 19:47:21.790019 785442 host.go:66] Checking if "minikube" exists ...
I0327 19:47:21.790033 785442 addons.go:234] Setting addon registry=true in "minikube"
I0327 19:47:21.790046 785442 addons.go:69] Setting volumesnapshots=true in profile "minikube"
I0327 19:47:21.790073 785442 addons.go:234] Setting addon volumesnapshots=true in "minikube"
I0327 19:47:21.790087 785442 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
I0327 19:47:21.790064 785442 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
I0327 19:47:21.790112 785442 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0327 19:47:21.790113 785442 addons.go:234] Setting addon cloud-spanner=true in "minikube"
I0327 19:47:21.790119 785442 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
I0327 19:47:21.790139 785442 host.go:66] Checking if "minikube" exists ...
I0327 19:47:21.790140 785442 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
I0327 19:47:21.790150 785442 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0327 19:47:21.790159 785442 host.go:66] Checking if "minikube" exists ...
I0327 19:47:21.790160 785442 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
I0327 19:47:21.790176 785442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0327 19:47:21.790205 785442 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
I0327 19:47:21.790274 785442 host.go:66] Checking if "minikube" exists ...
I0327 19:47:21.790687 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I0327 19:47:21.790711 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:21.790729 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I0327 19:47:21.790739 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:21.790764 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:21.790772 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:21.790806 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I0327 19:47:21.790102 785442 host.go:66] Checking if "minikube" exists ...
I0327 19:47:21.790826 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:21.790839 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I0327 19:47:21.790850 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:21.790874 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:21.790876 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:21.790937 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I0327 19:47:21.790953 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:21.790990 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:21.791115 785442 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
I0327 19:47:21.790076 785442 host.go:66] Checking if "minikube" exists ...
I0327 19:47:21.791190 785442 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
I0327 19:47:21.791223 785442 host.go:66] Checking if "minikube" exists ...
I0327 19:47:21.790142 785442 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0327 19:47:21.791942 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I0327 19:47:21.791965 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:21.791995 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:21.792011 785442 host.go:66] Checking if "minikube" exists ...
I0327 19:47:21.793824 785442 out.go:177] * Configuring local host environment ...
I0327 19:47:21.790091 785442 addons.go:69] Setting metrics-server=true in profile "minikube"
I0327 19:47:21.790812 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I0327 19:47:21.789968 785442 addons.go:234] Setting addon helm-tiller=true in "minikube"
I0327 19:47:21.792337 785442 addons.go:69] Setting gcp-auth=true in profile "minikube"
I0327 19:47:21.792754 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
W0327 19:47:21.795506 785442 out.go:239] *
W0327 19:47:21.795525 785442 out.go:239] ! The 'none' driver is designed for experts who need to integrate with an existing VM
W0327 19:47:21.795538 785442 out.go:239] * Most users should use the newer 'docker' driver instead, which does not require root!
W0327 19:47:21.795546 785442 out.go:239] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
W0327 19:47:21.795553 785442 out.go:239] *
I0327 19:47:21.793967 785442 addons.go:234] Setting addon metrics-server=true in "minikube"
W0327 19:47:21.795598 785442 out.go:239] ! kubectl and minikube configuration will be stored in /home/jenkins
W0327 19:47:21.795612 785442 out.go:239] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
W0327 19:47:21.795622 785442 out.go:239] *
I0327 19:47:21.795625 785442 host.go:66] Checking if "minikube" exists ...
I0327 19:47:21.793981 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:21.822769 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:21.822827 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:21.822997 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:21.794016 785442 host.go:66] Checking if "minikube" exists ...
I0327 19:47:21.823444 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:21.823763 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I0327 19:47:21.823798 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:21.823837 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:21.824103 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I0327 19:47:21.824132 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:21.824165 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:21.824383 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:21.824966 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I0327 19:47:21.824994 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:21.825029 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:21.825789 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:21.825826 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I0327 19:47:21.825897 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:21.825951 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0327 19:47:21.826399 785442 out.go:239] - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
W0327 19:47:21.826416 785442 out.go:239] - sudo chown -R $USER $HOME/.kube $HOME/.minikube
W0327 19:47:21.826428 785442 out.go:239] *
W0327 19:47:21.826439 785442 out.go:239] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I0327 19:47:21.826541 785442 start.go:234] Will wait 6m0s for node &{Name: IP:10.128.15.240 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0327 19:47:21.794039 785442 mustload.go:65] Loading cluster: minikube
I0327 19:47:21.794047 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:21.829173 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:21.829251 785442 out.go:177] * Verifying Kubernetes components...
I0327 19:47:21.829722 785442 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 19:47:21.831156 785442 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0327 19:47:21.832694 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I0327 19:47:21.832720 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:21.832768 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:21.847110 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:21.847881 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:21.847956 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:21.849401 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:21.850284 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:21.850347 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:21.852392 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:21.856934 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:21.860833 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:21.865501 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:21.865539 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:21.865574 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:21.865750 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:21.865501 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:21.865824 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:21.870089 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:21.870161 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:21.870703 785442 api_server.go:204] freezer state: "THAWED"
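The interleaved probes above all follow the same three-step check: `egrep ':freezer:' /proc/<apiserver-pid>/cgroup` finds the apiserver's freezer cgroup, that path is turned into a cgroupfs file, and reading it yields the state ("THAWED" means the process is not frozen). A sketch of the path derivation, using the line copied from this log as input (on a live cgroup-v1 host it would come from the egrep call; cgroup-v2 hosts have no freezer controller line):

```shell
# Derive the freezer.state file from a "<n>:freezer:<path>" cgroup line.
line='7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0'
state_file="/sys/fs/cgroup/freezer${line#*:freezer:}/freezer.state"
echo "$state_file"
# on the test host: sudo cat "$state_file"  ->  THAWED
```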
I0327 19:47:21.870754 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:21.873622 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:21.873690 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:21.877006 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:21.877185 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:21.877474 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:21.877529 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:21.878601 785442 api_server.go:204] freezer state: "THAWED"
I0327 19:47:21.878629 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:21.881117 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:21.889502 785442 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
I0327 19:47:21.883611 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:21.884859 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:21.887347 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:21.888039 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:21.888214 785442 api_server.go:204] freezer state: "THAWED"
I0327 19:47:21.890326 785442 api_server.go:204] freezer state: "THAWED"
I0327 19:47:21.891724 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:21.891774 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:21.892054 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:21.892340 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:21.892491 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:21.893001 785442 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
I0327 19:47:21.893234 785442 api_server.go:204] freezer state: "THAWED"
I0327 19:47:21.893254 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:21.893034 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0327 19:47:21.893479 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2459498510 /etc/kubernetes/addons/ig-namespace.yaml
I0327 19:47:21.893597 785442 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
I0327 19:47:21.893637 785442 host.go:66] Checking if "minikube" exists ...
I0327 19:47:21.894586 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I0327 19:47:21.894605 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:21.894635 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:21.895974 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:21.899276 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:21.899327 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:21.901501 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:21.903798 785442 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
I0327 19:47:21.905345 785442 api_server.go:204] freezer state: "THAWED"
I0327 19:47:21.905372 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:21.905482 785442 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
I0327 19:47:21.905514 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0327 19:47:21.905648 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube657095272 /etc/kubernetes/addons/deployment.yaml
I0327 19:47:21.905739 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:21.907572 785442 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.4
I0327 19:47:21.909008 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:21.909244 785442 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
I0327 19:47:21.909349 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0327 19:47:21.909519 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube572783290 /etc/kubernetes/addons/yakd-ns.yaml
I0327 19:47:21.911054 785442 api_server.go:204] freezer state: "THAWED"
I0327 19:47:21.911119 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:21.915475 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:21.926809 785442 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0327 19:47:21.919069 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:21.919105 785442 api_server.go:204] freezer state: "THAWED"
I0327 19:47:21.919837 785442 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0327 19:47:21.921142 785442 api_server.go:204] freezer state: "THAWED"
I0327 19:47:21.925131 785442 api_server.go:204] freezer state: "THAWED"
I0327 19:47:21.929461 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:21.929788 785442 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0327 19:47:21.929821 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0327 19:47:21.930026 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:21.930545 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:21.930565 785442 host.go:66] Checking if "minikube" exists ...
I0327 19:47:21.935868 785442 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0327 19:47:21.931109 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2057918794 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0327 19:47:21.931279 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I0327 19:47:21.934625 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:21.935375 785442 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0327 19:47:21.937316 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0327 19:47:21.937429 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube22919953 /etc/kubernetes/addons/ig-serviceaccount.yaml
I0327 19:47:21.938891 785442 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
I0327 19:47:21.938919 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0327 19:47:21.939019 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube816787765 /etc/kubernetes/addons/yakd-sa.yaml
I0327 19:47:21.942146 785442 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0327 19:47:21.942419 785442 api_server.go:204] freezer state: "THAWED"
I0327 19:47:21.942500 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:21.940316 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:21.942906 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:21.943746 785442 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0327 19:47:21.943771 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:21.944927 785442 api_server.go:204] freezer state: "THAWED"
I0327 19:47:21.947900 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:21.947997 785442 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 127.0.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0327 19:47:21.948126 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:21.948199 785442 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
I0327 19:47:21.952453 785442 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0327 19:47:21.952504 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0327 19:47:21.952657 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2469551737 /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0327 19:47:21.951435 785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0327 19:47:21.951957 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:21.954665 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:21.955928 785442 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0327 19:47:21.956405 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:21.956021 785442 out.go:177] - Using image ghcr.io/helm/tiller:v2.17.0
I0327 19:47:21.956410 785442 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
I0327 19:47:21.960187 785442 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0327 19:47:21.958482 785442 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0327 19:47:21.959438 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:21.960662 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:21.960758 785442 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0327 19:47:21.961686 785442 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
I0327 19:47:21.961797 785442 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
I0327 19:47:21.962126 785442 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0327 19:47:21.963737 785442 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
I0327 19:47:21.963979 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0327 19:47:21.963997 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0327 19:47:21.964008 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0327 19:47:21.964077 785442 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0327 19:47:21.967285 785442 out.go:177] - Using image docker.io/registry:2.8.3
I0327 19:47:21.966048 785442 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0327 19:47:21.966074 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
I0327 19:47:21.966200 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1979897496 /etc/kubernetes/addons/ig-role.yaml
I0327 19:47:21.966245 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0327 19:47:21.966823 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube176135027 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0327 19:47:21.966870 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3412833599 /etc/kubernetes/addons/yakd-crb.yaml
I0327 19:47:21.969752 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2231007622 /etc/kubernetes/addons/helm-tiller-dp.yaml
I0327 19:47:21.969908 785442 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0327 19:47:21.969975 785442 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0327 19:47:21.970116 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2381010362 /etc/kubernetes/addons/metrics-apiservice.yaml
I0327 19:47:21.973376 785442 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0327 19:47:21.971762 785442 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0327 19:47:21.971817 785442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0327 19:47:21.972242 785442 api_server.go:204] freezer state: "THAWED"
I0327 19:47:21.972498 785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0327 19:47:21.974861 785442 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
I0327 19:47:21.974893 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
I0327 19:47:21.975029 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4023595453 /etc/kubernetes/addons/registry-rc.yaml
I0327 19:47:21.975352 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:21.977427 785442 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0327 19:47:21.979045 785442 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0327 19:47:21.979081 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0327 19:47:21.979221 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube690712749 /etc/kubernetes/addons/rbac-external-attacher.yaml
I0327 19:47:21.984153 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:21.984188 785442 host.go:66] Checking if "minikube" exists ...
I0327 19:47:21.988661 785442 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0327 19:47:21.988699 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0327 19:47:21.988852 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube622246989 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0327 19:47:21.989429 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:21.990408 785442 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0327 19:47:21.990444 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
I0327 19:47:21.990755 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3615865372 /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0327 19:47:21.994299 785442 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0327 19:47:21.994339 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0327 19:47:21.994745 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2740868542 /etc/kubernetes/addons/rbac-hostpath.yaml
I0327 19:47:22.002100 785442 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0327 19:47:22.002127 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0327 19:47:22.002246 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2384368977 /etc/kubernetes/addons/ig-rolebinding.yaml
I0327 19:47:22.011696 785442 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
I0327 19:47:22.011754 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0327 19:47:22.012152 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube955068222 /etc/kubernetes/addons/registry-svc.yaml
I0327 19:47:22.012633 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:22.012692 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:22.021101 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0327 19:47:22.021291 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3341825910 /etc/kubernetes/addons/storage-provisioner.yaml
I0327 19:47:22.025403 785442 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
I0327 19:47:22.025449 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0327 19:47:22.025581 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2979057734 /etc/kubernetes/addons/yakd-svc.yaml
I0327 19:47:22.027691 785442 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
I0327 19:47:22.027730 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
I0327 19:47:22.027883 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3370427214 /etc/kubernetes/addons/helm-tiller-svc.yaml
I0327 19:47:22.035598 785442 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0327 19:47:22.035742 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0327 19:47:22.035945 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2367739608 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0327 19:47:22.036852 785442 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0327 19:47:22.036879 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0327 19:47:22.036985 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4106039266 /etc/kubernetes/addons/ig-clusterrole.yaml
I0327 19:47:22.037188 785442 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0327 19:47:22.037232 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0327 19:47:22.037370 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube216792825 /etc/kubernetes/addons/metrics-server-deployment.yaml
I0327 19:47:22.040511 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:22.040578 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:22.045417 785442 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
I0327 19:47:22.045450 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0327 19:47:22.045651 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3087722302 /etc/kubernetes/addons/registry-proxy.yaml
I0327 19:47:22.053702 785442 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0327 19:47:22.053738 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0327 19:47:22.055454 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube328119600 /etc/kubernetes/addons/metrics-server-rbac.yaml
I0327 19:47:22.055959 785442 api_server.go:204] freezer state: "THAWED"
I0327 19:47:22.055986 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:22.060443 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:22.068523 785442 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0327 19:47:22.063780 785442 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0327 19:47:22.079272 785442 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0327 19:47:22.080126 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0327 19:47:22.083864 785442 out.go:177] - Using image docker.io/busybox:stable
I0327 19:47:22.080926 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0327 19:47:22.081000 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1088940883 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0327 19:47:22.081006 785442 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0327 19:47:22.084156 785442 api_server.go:204] freezer state: "THAWED"
I0327 19:47:22.084257 785442 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
I0327 19:47:22.086049 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0327 19:47:22.086146 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0327 19:47:22.086286 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:22.086347 785442 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0327 19:47:22.086368 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0327 19:47:22.086582 785442 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0327 19:47:22.086619 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0327 19:47:22.087095 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2817974947 /etc/kubernetes/addons/yakd-dp.yaml
I0327 19:47:22.087343 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2638489851 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0327 19:47:22.087553 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3466187947 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0327 19:47:22.087737 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1871747012 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0327 19:47:22.088229 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube72790727 /etc/kubernetes/addons/metrics-server-service.yaml
I0327 19:47:22.093931 785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0327 19:47:22.095237 785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0327 19:47:22.097844 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:22.097953 785442 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
I0327 19:47:22.097972 785442 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0327 19:47:22.097979 785442 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
I0327 19:47:22.098016 785442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0327 19:47:22.127832 785442 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0327 19:47:22.127882 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0327 19:47:22.128012 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1571608221 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0327 19:47:22.149668 785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0327 19:47:22.154758 785442 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0327 19:47:22.154800 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0327 19:47:22.154970 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3882036622 /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0327 19:47:22.165235 785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
I0327 19:47:22.168283 785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0327 19:47:22.169796 785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0327 19:47:22.172197 785442 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
I0327 19:47:22.172233 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0327 19:47:22.172357 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube177075574 /etc/kubernetes/addons/ig-crd.yaml
I0327 19:47:22.180919 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0327 19:47:22.181135 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2154554086 /etc/kubernetes/addons/storageclass.yaml
I0327 19:47:22.200315 785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0327 19:47:22.221714 785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0327 19:47:22.224620 785442 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0327 19:47:22.224651 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0327 19:47:22.224769 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1972881721 /etc/kubernetes/addons/rbac-external-resizer.yaml
I0327 19:47:22.243948 785442 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0327 19:47:22.244000 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0327 19:47:22.244150 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3090195628 /etc/kubernetes/addons/ig-daemonset.yaml
I0327 19:47:22.305720 785442 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0327 19:47:22.305767 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0327 19:47:22.305948 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3317386515 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0327 19:47:22.317674 785442 exec_runner.go:51] Run: sudo systemctl start kubelet
I0327 19:47:22.327960 785442 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0327 19:47:22.328006 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0327 19:47:22.328176 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1388280319 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0327 19:47:22.347049 785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0327 19:47:22.355068 785442 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-15" to be "Ready" ...
I0327 19:47:22.359014 785442 node_ready.go:49] node "ubuntu-20-agent-15" has status "Ready":"True"
I0327 19:47:22.359044 785442 node_ready.go:38] duration metric: took 3.937317ms for node "ubuntu-20-agent-15" to be "Ready" ...
I0327 19:47:22.359056 785442 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0327 19:47:22.379141 785442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-9hd8k" in "kube-system" namespace to be "Ready" ...
I0327 19:47:22.385381 785442 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0327 19:47:22.385419 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0327 19:47:22.385542 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2768765113 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0327 19:47:22.437617 785442 start.go:948] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
I0327 19:47:22.496389 785442 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0327 19:47:22.498207 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0327 19:47:22.499274 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1577108307 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0327 19:47:22.604378 785442 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0327 19:47:22.604447 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0327 19:47:22.604611 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2203491235 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0327 19:47:22.648858 785442 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0327 19:47:22.649040 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0327 19:47:22.649229 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3492237834 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0327 19:47:22.867149 785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0327 19:47:22.982202 785442 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0327 19:47:22.988520 785442 addons.go:470] Verifying addon registry=true in "minikube"
I0327 19:47:22.991474 785442 out.go:177] * Verifying registry addon...
I0327 19:47:22.994085 785442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0327 19:47:23.004990 785442 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0327 19:47:23.005019 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:23.296803 785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.147081653s)
I0327 19:47:23.296843 785442 addons.go:470] Verifying addon metrics-server=true in "minikube"
I0327 19:47:23.349992 785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.256006951s)
I0327 19:47:23.401666 785442 pod_ready.go:92] pod "coredns-76f75df574-9hd8k" in "kube-system" namespace has status "Ready":"True"
I0327 19:47:23.401697 785442 pod_ready.go:81] duration metric: took 1.022523886s for pod "coredns-76f75df574-9hd8k" in "kube-system" namespace to be "Ready" ...
I0327 19:47:23.401712 785442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-z26gp" in "kube-system" namespace to be "Ready" ...
I0327 19:47:23.410878 785442 pod_ready.go:92] pod "coredns-76f75df574-z26gp" in "kube-system" namespace has status "Ready":"True"
I0327 19:47:23.410906 785442 pod_ready.go:81] duration metric: took 9.184406ms for pod "coredns-76f75df574-z26gp" in "kube-system" namespace to be "Ready" ...
I0327 19:47:23.410920 785442 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I0327 19:47:23.414129 785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (1.248841198s)
I0327 19:47:23.416861 785442 pod_ready.go:92] pod "etcd-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
I0327 19:47:23.416891 785442 pod_ready.go:81] duration metric: took 5.960565ms for pod "etcd-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I0327 19:47:23.416905 785442 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I0327 19:47:23.427642 785442 pod_ready.go:92] pod "kube-apiserver-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
I0327 19:47:23.427677 785442 pod_ready.go:81] duration metric: took 10.762359ms for pod "kube-apiserver-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I0327 19:47:23.427694 785442 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I0327 19:47:23.510720 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:23.547165 785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.377314384s)
I0327 19:47:23.550866 785442 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube service yakd-dashboard -n yakd-dashboard
I0327 19:47:23.574423 785442 pod_ready.go:92] pod "kube-controller-manager-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
I0327 19:47:23.574446 785442 pod_ready.go:81] duration metric: took 146.743815ms for pod "kube-controller-manager-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I0327 19:47:23.574457 785442 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zj2pl" in "kube-system" namespace to be "Ready" ...
I0327 19:47:23.575392 785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.407054783s)
I0327 19:47:23.776180 785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.42906421s)
I0327 19:47:23.905675 785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.705287333s)
W0327 19:47:23.905725 785442 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0327 19:47:23.905756 785442 retry.go:31] will retry after 254.531798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0327 19:47:23.960987 785442 pod_ready.go:92] pod "kube-proxy-zj2pl" in "kube-system" namespace has status "Ready":"True"
I0327 19:47:23.961023 785442 pod_ready.go:81] duration metric: took 386.55804ms for pod "kube-proxy-zj2pl" in "kube-system" namespace to be "Ready" ...
I0327 19:47:23.961037 785442 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I0327 19:47:23.999104 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:24.161396 785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0327 19:47:24.358940 785442 pod_ready.go:92] pod "kube-scheduler-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
I0327 19:47:24.358974 785442 pod_ready.go:81] duration metric: took 397.92767ms for pod "kube-scheduler-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
I0327 19:47:24.358989 785442 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-dvfbr" in "kube-system" namespace to be "Ready" ...
I0327 19:47:24.499250 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:24.991920 785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.124718235s)
I0327 19:47:24.991954 785442 addons.go:470] Verifying addon csi-hostpath-driver=true in "minikube"
I0327 19:47:24.993720 785442 out.go:177] * Verifying csi-hostpath-driver addon...
I0327 19:47:24.996862 785442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0327 19:47:25.000362 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:25.002176 785442 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0327 19:47:25.002199 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:25.498533 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:25.501801 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:25.998945 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:26.002911 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:26.366380 785442 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-dvfbr" in "kube-system" namespace has status "Ready":"False"
I0327 19:47:26.499816 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:26.503146 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:26.941880 785442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.780402496s)
I0327 19:47:26.999994 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:27.002112 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:27.500049 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:27.502072 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:27.998741 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:28.002320 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:28.498926 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:28.502011 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:28.883652 785442 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0327 19:47:28.883825 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2740077225 /var/lib/minikube/google_application_credentials.json
I0327 19:47:28.886031 785442 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-dvfbr" in "kube-system" namespace has status "Ready":"False"
I0327 19:47:28.930154 785442 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0327 19:47:28.930315 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4292447473 /var/lib/minikube/google_cloud_project
I0327 19:47:28.942321 785442 addons.go:234] Setting addon gcp-auth=true in "minikube"
I0327 19:47:28.942387 785442 host.go:66] Checking if "minikube" exists ...
I0327 19:47:28.942924 785442 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
I0327 19:47:28.942945 785442 api_server.go:166] Checking apiserver status ...
I0327 19:47:28.942981 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:28.964018 785442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/787016/cgroup
I0327 19:47:28.979942 785442 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0"
I0327 19:47:28.980036 785442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0cffbf5c78c27b0243222aa0fdd9aab4/d580fffa011f993a6db64beec2513de40e38918c9875ec5a10c1704a9a56d7d0/freezer.state
I0327 19:47:28.991793 785442 api_server.go:204] freezer state: "THAWED"
I0327 19:47:28.991833 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:28.996386 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:28.996471 785442 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
I0327 19:47:29.002091 785442 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0327 19:47:28.999564 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:29.001560 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:29.003707 785442 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
I0327 19:47:29.005378 785442 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0327 19:47:29.005427 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0327 19:47:29.005619 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3399691076 /etc/kubernetes/addons/gcp-auth-ns.yaml
I0327 19:47:29.016589 785442 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0327 19:47:29.016623 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0327 19:47:29.016723 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube459639426 /etc/kubernetes/addons/gcp-auth-service.yaml
I0327 19:47:29.026691 785442 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0327 19:47:29.026723 785442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0327 19:47:29.026825 785442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube74684384 /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0327 19:47:29.035885 785442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0327 19:47:29.454802 785442 addons.go:470] Verifying addon gcp-auth=true in "minikube"
I0327 19:47:29.457748 785442 out.go:177] * Verifying gcp-auth addon...
I0327 19:47:29.461272 785442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0327 19:47:29.464709 785442 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0327 19:47:29.464732 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:29.499972 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:29.503226 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:29.865919 785442 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-dvfbr" in "kube-system" namespace has status "Ready":"True"
I0327 19:47:29.865947 785442 pod_ready.go:81] duration metric: took 5.506949661s for pod "nvidia-device-plugin-daemonset-dvfbr" in "kube-system" namespace to be "Ready" ...
I0327 19:47:29.865960 785442 pod_ready.go:38] duration metric: took 7.506891255s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0327 19:47:29.865984 785442 api_server.go:52] waiting for apiserver process to appear ...
I0327 19:47:29.866070 785442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0327 19:47:29.884958 785442 api_server.go:72] duration metric: took 8.058273533s to wait for apiserver process to appear ...
I0327 19:47:29.884989 785442 api_server.go:88] waiting for apiserver healthz status ...
I0327 19:47:29.885014 785442 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
I0327 19:47:29.889485 785442 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
ok
I0327 19:47:29.890860 785442 api_server.go:141] control plane version: v1.29.3
I0327 19:47:29.890889 785442 api_server.go:131] duration metric: took 5.891243ms to wait for apiserver health ...
I0327 19:47:29.890901 785442 system_pods.go:43] waiting for kube-system pods to appear ...
I0327 19:47:29.900804 785442 system_pods.go:59] 18 kube-system pods found
I0327 19:47:29.900842 785442 system_pods.go:61] "coredns-76f75df574-9hd8k" [a4783215-45d9-4bd8-8362-a4a8c6c24223] Running
I0327 19:47:29.900849 785442 system_pods.go:61] "coredns-76f75df574-z26gp" [60b43498-08a2-4e5e-a8f9-7828b65d047f] Running
I0327 19:47:29.900856 785442 system_pods.go:61] "csi-hostpath-attacher-0" [df2fab58-2a5b-4139-b167-ce8300067ee0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0327 19:47:29.900865 785442 system_pods.go:61] "csi-hostpath-resizer-0" [7f0e4f91-6759-411a-b014-114732a72381] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0327 19:47:29.900875 785442 system_pods.go:61] "csi-hostpathplugin-gwdj5" [29cdfc20-973f-4a21-bc62-db14b8c63eec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0327 19:47:29.900881 785442 system_pods.go:61] "etcd-ubuntu-20-agent-15" [34f7260d-c13b-43f9-a357-e40ba7a0b538] Running
I0327 19:47:29.900891 785442 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-15" [83ecd64c-552f-47c9-994d-0d6e0fd4aff8] Running
I0327 19:47:29.900897 785442 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-15" [d14c2b5d-fe8c-4bb0-8ee6-090e940b87f5] Running
I0327 19:47:29.900905 785442 system_pods.go:61] "kube-proxy-zj2pl" [7f4fd90b-fe59-4d82-bc93-6bf1e1f61698] Running
I0327 19:47:29.900911 785442 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-15" [c6819fb1-7b50-454c-a8fc-911139e455a1] Running
I0327 19:47:29.900923 785442 system_pods.go:61] "metrics-server-69cf46c98-99lnl" [6d4266fb-20c3-437e-b8c3-33bc953b1539] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0327 19:47:29.900929 785442 system_pods.go:61] "nvidia-device-plugin-daemonset-dvfbr" [3a81a4f4-da07-4e16-bad5-9c7c5139b5ab] Running
I0327 19:47:29.900934 785442 system_pods.go:61] "registry-2hmfs" [7e30047c-df90-44cb-b9a2-98b6574dd90f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0327 19:47:29.900968 785442 system_pods.go:61] "registry-proxy-z78qc" [afce7356-364e-4145-824f-b686975f47b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0327 19:47:29.900985 785442 system_pods.go:61] "snapshot-controller-58dbcc7b99-d8hzj" [eab096b6-514d-48e3-aed2-f1dfecf4ff99] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0327 19:47:29.901002 785442 system_pods.go:61] "snapshot-controller-58dbcc7b99-njnrq" [7bad25f0-ddd1-4b97-8155-381a3c964b66] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0327 19:47:29.901012 785442 system_pods.go:61] "storage-provisioner" [20e18899-eefb-4036-ac1e-6522ce4203cf] Running
I0327 19:47:29.901020 785442 system_pods.go:61] "tiller-deploy-7b677967b9-7gsf8" [c4e50e3b-2e4a-4dee-aa77-fcb4e8acd261] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0327 19:47:29.901031 785442 system_pods.go:74] duration metric: took 10.124352ms to wait for pod list to return data ...
I0327 19:47:29.901045 785442 default_sa.go:34] waiting for default service account to be created ...
I0327 19:47:29.903710 785442 default_sa.go:45] found service account: "default"
I0327 19:47:29.903739 785442 default_sa.go:55] duration metric: took 2.68335ms for default service account to be created ...
I0327 19:47:29.903750 785442 system_pods.go:116] waiting for k8s-apps to be running ...
I0327 19:47:29.914072 785442 system_pods.go:86] 18 kube-system pods found
I0327 19:47:29.914114 785442 system_pods.go:89] "coredns-76f75df574-9hd8k" [a4783215-45d9-4bd8-8362-a4a8c6c24223] Running
I0327 19:47:29.914123 785442 system_pods.go:89] "coredns-76f75df574-z26gp" [60b43498-08a2-4e5e-a8f9-7828b65d047f] Running
I0327 19:47:29.914135 785442 system_pods.go:89] "csi-hostpath-attacher-0" [df2fab58-2a5b-4139-b167-ce8300067ee0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0327 19:47:29.914145 785442 system_pods.go:89] "csi-hostpath-resizer-0" [7f0e4f91-6759-411a-b014-114732a72381] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0327 19:47:29.914170 785442 system_pods.go:89] "csi-hostpathplugin-gwdj5" [29cdfc20-973f-4a21-bc62-db14b8c63eec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0327 19:47:29.914178 785442 system_pods.go:89] "etcd-ubuntu-20-agent-15" [34f7260d-c13b-43f9-a357-e40ba7a0b538] Running
I0327 19:47:29.914186 785442 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-15" [83ecd64c-552f-47c9-994d-0d6e0fd4aff8] Running
I0327 19:47:29.914194 785442 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-15" [d14c2b5d-fe8c-4bb0-8ee6-090e940b87f5] Running
I0327 19:47:29.914200 785442 system_pods.go:89] "kube-proxy-zj2pl" [7f4fd90b-fe59-4d82-bc93-6bf1e1f61698] Running
I0327 19:47:29.914206 785442 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-15" [c6819fb1-7b50-454c-a8fc-911139e455a1] Running
I0327 19:47:29.914216 785442 system_pods.go:89] "metrics-server-69cf46c98-99lnl" [6d4266fb-20c3-437e-b8c3-33bc953b1539] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0327 19:47:29.914233 785442 system_pods.go:89] "nvidia-device-plugin-daemonset-dvfbr" [3a81a4f4-da07-4e16-bad5-9c7c5139b5ab] Running
I0327 19:47:29.914242 785442 system_pods.go:89] "registry-2hmfs" [7e30047c-df90-44cb-b9a2-98b6574dd90f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0327 19:47:29.914251 785442 system_pods.go:89] "registry-proxy-z78qc" [afce7356-364e-4145-824f-b686975f47b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0327 19:47:29.914346 785442 system_pods.go:89] "snapshot-controller-58dbcc7b99-d8hzj" [eab096b6-514d-48e3-aed2-f1dfecf4ff99] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0327 19:47:29.914405 785442 system_pods.go:89] "snapshot-controller-58dbcc7b99-njnrq" [7bad25f0-ddd1-4b97-8155-381a3c964b66] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0327 19:47:29.914452 785442 system_pods.go:89] "storage-provisioner" [20e18899-eefb-4036-ac1e-6522ce4203cf] Running
I0327 19:47:29.914473 785442 system_pods.go:89] "tiller-deploy-7b677967b9-7gsf8" [c4e50e3b-2e4a-4dee-aa77-fcb4e8acd261] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
I0327 19:47:29.914490 785442 system_pods.go:126] duration metric: took 10.731802ms to wait for k8s-apps to be running ...
I0327 19:47:29.914512 785442 system_svc.go:44] waiting for kubelet service to be running ....
I0327 19:47:29.914567 785442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0327 19:47:29.942282 785442 system_svc.go:56] duration metric: took 27.755687ms WaitForService to wait for kubelet
I0327 19:47:29.942325 785442 kubeadm.go:576] duration metric: took 8.115647513s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0327 19:47:29.942354 785442 node_conditions.go:102] verifying NodePressure condition ...
I0327 19:47:29.959218 785442 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0327 19:47:29.959304 785442 node_conditions.go:123] node cpu capacity is 8
I0327 19:47:29.959327 785442 node_conditions.go:105] duration metric: took 16.965531ms to run NodePressure ...
I0327 19:47:29.959339 785442 start.go:240] waiting for startup goroutines ...
I0327 19:47:29.965407 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:30.000248 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:30.003457 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:30.465976 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:30.499712 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:30.503073 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:30.965781 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:30.999553 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:31.003252 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:31.464342 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:31.500027 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:31.502057 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:31.965017 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:32.000149 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:32.003855 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:32.465127 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:32.500585 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:32.502423 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:32.984085 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:32.999821 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:33.003753 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:33.465160 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:33.501957 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:33.502669 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:33.965799 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:34.000233 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:34.002577 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:34.465952 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:34.500011 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:34.502439 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:34.965003 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:34.999680 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:35.002996 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:35.466026 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:35.499724 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:35.503091 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:35.965421 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:35.999921 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:36.001788 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:36.465795 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:36.499400 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0327 19:47:36.502597 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:36.965306 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:36.999817 785442 kapi.go:107] duration metric: took 14.005731704s to wait for kubernetes.io/minikube-addons=registry ...
I0327 19:47:37.002356 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:37.465544 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:37.502782 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:37.965762 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:38.002699 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:38.465250 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:38.503880 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:38.965638 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:39.002900 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:39.465960 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:39.503258 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:39.965502 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:40.003531 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:40.466095 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:40.503329 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:40.965218 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:41.003263 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:41.464507 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:41.503245 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:41.965204 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:42.002557 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:42.465781 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:42.502808 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:42.965804 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:43.003271 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:43.465528 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:43.503224 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:43.966393 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:44.002422 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:44.465657 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:44.502362 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:44.964949 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:45.002610 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:45.465932 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:45.502730 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:45.965826 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:46.042758 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:46.465380 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:46.502024 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:46.985346 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:47.002425 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:47.465161 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:47.503315 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:47.965294 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:48.001379 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:48.465964 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:48.503527 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:48.965071 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:49.003131 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:49.465603 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:49.502630 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:49.965149 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:50.002398 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:50.465413 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:50.502976 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:50.965274 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:51.006524 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:51.465729 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:51.502588 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:51.966434 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:52.003241 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:52.465100 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:52.503476 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:52.965413 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:53.002788 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:53.465387 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:53.502160 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:53.965722 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:54.002888 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:54.464765 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:54.503482 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:54.965083 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:55.002875 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0327 19:47:55.465013 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:55.504953 785442 kapi.go:107] duration metric: took 30.508088364s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0327 19:47:55.965406 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:56.464818 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:56.964808 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:57.465334 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:57.965012 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:58.465596 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:58.965238 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:59.465171 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:47:59.964535 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:48:00.464946 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:48:00.965388 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:48:01.464840 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:48:01.965898 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:48:02.465774 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:48:02.965113 785442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0327 19:48:03.465471 785442 kapi.go:107] duration metric: took 34.004198546s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0327 19:48:03.467537 785442 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
I0327 19:48:03.469094 785442 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0327 19:48:03.470521 785442 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0327 19:48:03.472278 785442 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, metrics-server, storage-provisioner, helm-tiller, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
I0327 19:48:03.474121 785442 addons.go:505] duration metric: took 41.684345361s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass metrics-server storage-provisioner helm-tiller yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry csi-hostpath-driver gcp-auth]
I0327 19:48:03.474176 785442 start.go:245] waiting for cluster config update ...
I0327 19:48:03.474204 785442 start.go:254] writing updated cluster config ...
I0327 19:48:03.474467 785442 exec_runner.go:51] Run: rm -f paused
I0327 19:48:03.521304 785442 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
I0327 19:48:03.523261 785442 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
-- Logs begin at Thu 2024-02-29 08:28:27 UTC, end at Wed 2024-03-27 19:51:28 UTC. --
Mar 27 19:47:53 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:47:53.634640080Z" level=warning msg="reference for unknown type: " digest="sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f" remote="registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f" spanID=e703d01138cf4bce traceID=2d8c6955bfdb19adeac46ceeaeaddcee
Mar 27 19:47:54 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:47:54Z" level=info msg="Stop pulling image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f: Status: Downloaded newer image for registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f"
Mar 27 19:48:01 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:48:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7e6cdc336421fd4b34195732f1a8c8fc9cce9cf94cb1da1555840271e9f27f53/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Mar 27 19:48:01 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:01.716663193Z" level=warning msg="reference for unknown type: " digest="sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb" spanID=c19e2df25da3743a traceID=3b48c0a7ad70ec54b0fe3a3bb7c26e23
Mar 27 19:48:02 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:48:02Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb"
Mar 27 19:48:07 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:48:07Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff"
Mar 27 19:48:08 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:08.814164791Z" level=info msg="ignoring event" container=77103e5616e629a5297ec4603e8f341512ea16af8f930b073addee727cd20ea9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 27 19:48:09 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:09.241489344Z" level=error msg="Failed to compute size of container rootfs bc753d2c4025cad80aa3c14a881d85de106db74bdbfc6b59bead02bc9eb657ae: mount does not exist" spanID=ca5094ed40995b00 traceID=0860beeb79332bc55e305fb55d6ebe0d
Mar 27 19:48:09 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:48:09Z" level=error msg="Error response from daemon: No such container: bc753d2c4025cad80aa3c14a881d85de106db74bdbfc6b59bead02bc9eb657ae Failed to get stats from container bc753d2c4025cad80aa3c14a881d85de106db74bdbfc6b59bead02bc9eb657ae"
Mar 27 19:48:14 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:48:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e31ea2353b274ecb5c1df789ebe1bc17a867d981bb972889f2eb7de254a42938/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Mar 27 19:48:14 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:48:14Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:latest: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest"
Mar 27 19:48:41 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:48:41Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff"
Mar 27 19:48:42 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:42.708105685Z" level=error msg="stream copy error: reading from a closed fifo"
Mar 27 19:48:42 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:42.708105787Z" level=error msg="stream copy error: reading from a closed fifo"
Mar 27 19:48:42 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:42.710301308Z" level=error msg="Error running exec 7eb62454195deede3939c2e499360a60b8bda066cd8548b4ffd8b0c52cfafa90 in container: OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" spanID=fbe1b001366d1e6a traceID=906081f0ead8899dc1832d7daaab7043
Mar 27 19:48:42 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:42.894276490Z" level=info msg="ignoring event" container=4c5f888063aba4bf2b4443b99505e49421712c2bec31f8a110e6ce0531233fad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 27 19:48:44 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:44.816535808Z" level=info msg="ignoring event" container=20cda7b4d3cec7047690a051c164b65830c7141e58051470fb4e5b586e6590ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 27 19:48:44 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:44.825968337Z" level=warning msg="failed to close stdin: task 20cda7b4d3cec7047690a051c164b65830c7141e58051470fb4e5b586e6590ef not found: not found"
Mar 27 19:48:46 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:48:46.864341409Z" level=info msg="ignoring event" container=e31ea2353b274ecb5c1df789ebe1bc17a867d981bb972889f2eb7de254a42938 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 27 19:49:23 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:49:23Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff"
Mar 27 19:49:24 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:49:24.820864008Z" level=info msg="ignoring event" container=49013e16b1c71ee46e566e8f014699638841eeeed565099796e3f94f1b3bd308 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 27 19:50:48 ubuntu-20-agent-15 cri-dockerd[785942]: time="2024-03-27T19:50:48Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff"
Mar 27 19:50:49 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:50:49.834401828Z" level=info msg="ignoring event" container=e6b3e5d9b741ce574dd6db0c3dbf26a37fc208d66031963616733c1c2e071aa2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 27 19:51:27 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:51:27.923245251Z" level=info msg="ignoring event" container=606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 27 19:51:28 ubuntu-20-agent-15 dockerd[785665]: time="2024-03-27T19:51:28.050666359Z" level=info msg="ignoring event" container=943e6bd23bafc32f2a23d65ac3f717e3b702a8ca9d092f5a50bacd51b6d75545 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
e6b3e5d9b741c ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff 40 seconds ago Exited gadget 5 e093f01a76c61 gadget-vpxgx
7dd753def982b gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb 3 minutes ago Running gcp-auth 0 7e6cdc336421f gcp-auth-7d69788767-fglgd
cf0e7f4d6f99e registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 3 minutes ago Running csi-snapshotter 0 059b87204b9a0 csi-hostpathplugin-gwdj5
bf82fabdb6deb registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 3 minutes ago Running csi-provisioner 0 059b87204b9a0 csi-hostpathplugin-gwdj5
4856bc6275ab7 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0 3 minutes ago Running liveness-probe 0 059b87204b9a0 csi-hostpathplugin-gwdj5
79dda4afae9b9 registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 3 minutes ago Running hostpath 0 059b87204b9a0 csi-hostpathplugin-gwdj5
1bf858930ae13 registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c 3 minutes ago Running node-driver-registrar 0 059b87204b9a0 csi-hostpathplugin-gwdj5
a5058a4a60781 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c 3 minutes ago Running csi-external-health-monitor-controller 0 059b87204b9a0 csi-hostpathplugin-gwdj5
a61d2bf4ea597 registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 3 minutes ago Running csi-resizer 0 4c2a8faa4be44 csi-hostpath-resizer-0
191a173649496 registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 3 minutes ago Running csi-attacher 0 e56c2d1c84c54 csi-hostpath-attacher-0
930ad351d2c9f registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 3 minutes ago Running volume-snapshot-controller 0 2c3b34cba55e5 snapshot-controller-58dbcc7b99-njnrq
e293012664f2f registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 3 minutes ago Running volume-snapshot-controller 0 e916d5c7ce86c snapshot-controller-58dbcc7b99-d8hzj
baa4c85c6221e marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310 3 minutes ago Running yakd 0 be2c04cad9a4a yakd-dashboard-9947fc6bf-bsvfh
405db64cb85cb rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 3 minutes ago Running local-path-provisioner 0 eb0704d9bc5fc local-path-provisioner-78b46b4d5c-kfxq8
c6b840a620ed3 ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f 3 minutes ago Running tiller 0 76aaf3717d682 tiller-deploy-7b677967b9-7gsf8
f8d17fababe41 gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367 3 minutes ago Running registry-proxy 0 3615e2b78473b registry-proxy-z78qc
ba2eefde6b1f7 registry.k8s.io/metrics-server/metrics-server@sha256:1c0419326500f1704af580d12a579671b2c3a06a8aa918cd61d0a35fb2d6b3ce 3 minutes ago Running metrics-server 0 b21b49ee6a24f metrics-server-69cf46c98-99lnl
606866de2fb6b registry@sha256:fb9c9aef62af3955f6014613456551c92e88a67dcf1fc51f5f91bcbd1832813f 3 minutes ago Unknown registry 0 943e6bd23bafc registry-2hmfs
9875ce84b510a gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50 3 minutes ago Running cloud-spanner-emulator 0 d30f17a6fb992 cloud-spanner-emulator-5446596998-j5qwr
a54c4d62a3b16 nvcr.io/nvidia/k8s-device-plugin@sha256:50aa9517d771e3b0ffa7fded8f1e988dba680a7ff5efce162ce31d1b5ec043e2 4 minutes ago Running nvidia-device-plugin-ctr 0 9560d5b62bed9 nvidia-device-plugin-daemonset-dvfbr
7d6b506e168d1 6e38f40d628db 4 minutes ago Running storage-provisioner 0 5b6c4d5dd4c98 storage-provisioner
8a99b08e16c22 cbb01a7bd410d 4 minutes ago Running coredns 0 2faf66181e661 coredns-76f75df574-9hd8k
e56210d620d68 a1d263b5dc5b0 4 minutes ago Running kube-proxy 0 d3594f21d5f3a kube-proxy-zj2pl
5ed77f086ed62 6052a25da3f97 4 minutes ago Running kube-controller-manager 0 c5f09be0b4887 kube-controller-manager-ubuntu-20-agent-15
f7f6eba592ba1 3861cfcd7c04c 4 minutes ago Running etcd 0 6484f5fccf787 etcd-ubuntu-20-agent-15
5d7c377589897 8c390d98f50c0 4 minutes ago Running kube-scheduler 0 93922e3ed345f kube-scheduler-ubuntu-20-agent-15
d580fffa011f9 39f995c9f1996 4 minutes ago Running kube-apiserver 0 28698b144e0fd kube-apiserver-ubuntu-20-agent-15
==> coredns [8a99b08e16c2] <==
[ERROR] plugin/errors: 2 5413775293664718515.2396988378427081446. HINFO: read udp 10.244.0.4:56081->169.254.169.254:53: i/o timeout
[ERROR] plugin/errors: 2 5413775293664718515.2396988378427081446. HINFO: read udp 10.244.0.4:45622->169.254.169.254:53: i/o timeout
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
[INFO] Reloading complete
[INFO] 127.0.0.1:42997 - 34889 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 6.001289416s
[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:38035->169.254.169.254:53: i/o timeout
[INFO] 127.0.0.1:38164 - 1255 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 6.002291992s
[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:51981->169.254.169.254:53: i/o timeout
[INFO] 127.0.0.1:34697 - 14013 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 4.001487388s
[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:60245->169.254.169.254:53: i/o timeout
[INFO] 127.0.0.1:33383 - 51807 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 2.000911871s
[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:47173->169.254.169.254:53: i/o timeout
[INFO] 127.0.0.1:33516 - 63347 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 2.000715458s
[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:33676->169.254.169.254:53: i/o timeout
[INFO] 127.0.0.1:56508 - 28845 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 2.000350312s
[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:60360->169.254.169.254:53: i/o timeout
[INFO] 127.0.0.1:40670 - 25520 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 2.00071576s
[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:55322->169.254.169.254:53: i/o timeout
[INFO] 127.0.0.1:58083 - 53156 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 2.000681164s
[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:50925->169.254.169.254:53: i/o timeout
[INFO] 127.0.0.1:36019 - 41800 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 2.000710297s
[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:50693->169.254.169.254:53: i/o timeout
[INFO] 127.0.0.1:36036 - 9426 "HINFO IN 7147265740188258331.2114921177029480365. udp 57 false 512" - - 0 2.001175497s
[ERROR] plugin/errors: 2 7147265740188258331.2114921177029480365. HINFO: read udp 10.244.0.4:57632->169.254.169.254:53: i/o timeout
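Editor's note: every CoreDNS failure above is an i/o timeout to 169.254.169.254:53, which on GCE is the metadata server acting as the VM's upstream resolver — consistent with the registry-test wget and the gcp-auth lookup failing on DNS. A hypothetical one-liner (the `sample` lines are illustrative copies, not the live log) to confirm all timeouts target the same upstream:

```shell
# Extract the upstream "addr:port" from CoreDNS i/o-timeout lines and
# tally them; a single address confirms one broken upstream resolver.
sample='[ERROR] plugin/errors: 2 x. HINFO: read udp 10.244.0.4:56081->169.254.169.254:53: i/o timeout
[ERROR] plugin/errors: 2 x. HINFO: read udp 10.244.0.4:45622->169.254.169.254:53: i/o timeout'
printf '%s\n' "$sample" \
  | awk -F'->' '/i\/o timeout/ {split($2, a, ": "); print a[1]}' \
  | sort | uniq -c
```

In the real log the same pipeline could be fed from `kubectl logs -n kube-system -l k8s-app=kube-dns`.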
==> describe nodes <==
Name: ubuntu-20-agent-15
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ubuntu-20-agent-15
kubernetes.io/os=linux
minikube.k8s.io/commit=fd5228225874e763d59e7e8bf88a02e145755a81
minikube.k8s.io/name=minikube
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_03_27T19_47_08_0700
minikube.k8s.io/version=v1.33.0-beta.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=ubuntu-20-agent-15
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-15"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 27 Mar 2024 19:47:05 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ubuntu-20-agent-15
AcquireTime: <unset>
RenewTime: Wed, 27 Mar 2024 19:51:23 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 27 Mar 2024 19:48:41 +0000 Wed, 27 Mar 2024 19:47:05 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 27 Mar 2024 19:48:41 +0000 Wed, 27 Mar 2024 19:47:05 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 27 Mar 2024 19:48:41 +0000 Wed, 27 Mar 2024 19:47:05 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 27 Mar 2024 19:48:41 +0000 Wed, 27 Mar 2024 19:47:05 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 10.128.15.240
Hostname: ubuntu-20-agent-15
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859344Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859344Ki
pods: 110
System Info:
Machine ID: 591c9f1229383743e2bfc56a050d43d1
System UUID: b37db8a4-1476-dab1-7f0f-0d5cfb4ed197
Boot ID: 947a0fb0-1897-4d21-b854-0f0a395b1b8e
Kernel Version: 5.15.0-1054-gcp
OS Image: Ubuntu 20.04.6 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://26.0.0
Kubelet Version: v1.29.3
Kube-Proxy Version: v1.29.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (21 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default cloud-spanner-emulator-5446596998-j5qwr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m6s
gadget gadget-vpxgx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m5s
gcp-auth gcp-auth-7d69788767-fglgd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m59s
kube-system coredns-76f75df574-9hd8k 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 4m7s
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m4s
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m4s
kube-system csi-hostpathplugin-gwdj5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m4s
kube-system etcd-ubuntu-20-agent-15 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 4m20s
kube-system kube-apiserver-ubuntu-20-agent-15 250m (3%) 0 (0%) 0 (0%) 0 (0%) 4m20s
kube-system kube-controller-manager-ubuntu-20-agent-15 200m (2%) 0 (0%) 0 (0%) 0 (0%) 4m20s
kube-system kube-proxy-zj2pl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m7s
kube-system kube-scheduler-ubuntu-20-agent-15 100m (1%) 0 (0%) 0 (0%) 0 (0%) 4m21s
kube-system metrics-server-69cf46c98-99lnl 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 4m5s
kube-system nvidia-device-plugin-daemonset-dvfbr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m6s
kube-system registry-proxy-z78qc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m6s
kube-system snapshot-controller-58dbcc7b99-d8hzj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m5s
kube-system snapshot-controller-58dbcc7b99-njnrq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m5s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m5s
kube-system tiller-deploy-7b677967b9-7gsf8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m5s
local-path-storage local-path-provisioner-78b46b4d5c-kfxq8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m5s
yakd-dashboard yakd-dashboard-9947fc6bf-bsvfh 0 (0%) 0 (0%) 128Mi (0%) 256Mi (0%) 4m5s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 498Mi (1%) 426Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m5s kube-proxy
Normal Starting 4m20s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m20s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m20s kubelet Node ubuntu-20-agent-15 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m20s kubelet Node ubuntu-20-agent-15 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m20s kubelet Node ubuntu-20-agent-15 status is now: NodeHasSufficientPID
Normal RegisteredNode 4m7s node-controller Node ubuntu-20-agent-15 event: Registered Node ubuntu-20-agent-15 in Controller
==> dmesg <==
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 44 49 e3 ed 41 08 06
[ +0.200060] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 9a 04 78 34 4e d2 08 06
[ +13.545955] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e d7 a1 87 4f 09 08 06
[ +2.303509] IPv4: martian source 10.244.0.1 from 10.244.0.11, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c3 43 36 21 24 08 06
[ +7.324932] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 da 12 b6 8d 71 08 06
[ +0.042238] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff fa a9 aa e2 5d 46 08 06
[ +3.972186] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 1b 52 bb 9e d0 08 06
[ +0.018071] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 5a 64 50 1e 3c 08 06
[ +1.970234] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 4c fd 79 4f 17 08 06
[ +0.228317] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 7e 62 3c 09 0a 08 06
[ +0.745030] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 1b c3 96 35 e3 08 06
[Mar27 19:48] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff be b1 1b 5a 71 f7 08 06
[ +11.874368] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 1f 65 d8 05 1f 08 06
==> etcd [f7f6eba592ba] <==
{"level":"info","ts":"2024-03-27T19:47:04.182361Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2024-03-27T19:47:04.182406Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2024-03-27T19:47:04.182417Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2024-03-27T19:47:04.182883Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"10.128.15.240:2380"}
{"level":"info","ts":"2024-03-27T19:47:04.182914Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"10.128.15.240:2380"}
{"level":"info","ts":"2024-03-27T19:47:04.183197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 switched to configuration voters=(1436903241728707736)"}
{"level":"info","ts":"2024-03-27T19:47:04.183277Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3112ce273fbe8262","local-member-id":"13f0e7e2a3d8cc98","added-peer-id":"13f0e7e2a3d8cc98","added-peer-peer-urls":["https://10.128.15.240:2380"]}
{"level":"info","ts":"2024-03-27T19:47:04.367204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 is starting a new election at term 1"}
{"level":"info","ts":"2024-03-27T19:47:04.367258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 became pre-candidate at term 1"}
{"level":"info","ts":"2024-03-27T19:47:04.36729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 received MsgPreVoteResp from 13f0e7e2a3d8cc98 at term 1"}
{"level":"info","ts":"2024-03-27T19:47:04.367305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 became candidate at term 2"}
{"level":"info","ts":"2024-03-27T19:47:04.367313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 received MsgVoteResp from 13f0e7e2a3d8cc98 at term 2"}
{"level":"info","ts":"2024-03-27T19:47:04.367325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"13f0e7e2a3d8cc98 became leader at term 2"}
{"level":"info","ts":"2024-03-27T19:47:04.36735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 13f0e7e2a3d8cc98 elected leader 13f0e7e2a3d8cc98 at term 2"}
{"level":"info","ts":"2024-03-27T19:47:04.368461Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"13f0e7e2a3d8cc98","local-member-attributes":"{Name:ubuntu-20-agent-15 ClientURLs:[https://10.128.15.240:2379]}","request-path":"/0/members/13f0e7e2a3d8cc98/attributes","cluster-id":"3112ce273fbe8262","publish-timeout":"7s"}
{"level":"info","ts":"2024-03-27T19:47:04.368516Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-03-27T19:47:04.368665Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-03-27T19:47:04.368644Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-03-27T19:47:04.36889Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-03-27T19:47:04.36891Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-03-27T19:47:04.369417Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3112ce273fbe8262","local-member-id":"13f0e7e2a3d8cc98","cluster-version":"3.5"}
{"level":"info","ts":"2024-03-27T19:47:04.369646Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-03-27T19:47:04.369674Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-03-27T19:47:04.37093Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.128.15.240:2379"}
{"level":"info","ts":"2024-03-27T19:47:04.37128Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
==> gcp-auth [7dd753def982] <==
2024/03/27 19:48:02 GCP Auth Webhook started!
2024/03/27 19:48:13 Ready to marshal response ...
2024/03/27 19:48:13 Ready to write response ...
2024/03/27 19:48:32 failed to get releases file: Get "https://storage.googleapis.com/minikube-gcp-auth/releases.json": dial tcp: lookup storage.googleapis.com: i/o timeout
==> kernel <==
19:51:29 up 3:33, 0 users, load average: 0.47, 1.04, 1.55
Linux ubuntu-20-agent-15 5.15.0-1054-gcp #62~20.04.1-Ubuntu SMP Wed Mar 13 20:29:29 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.6 LTS"
==> kube-apiserver [d580fffa011f] <==
I0327 19:47:23.746503 1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
I0327 19:47:23.792823 1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0327 19:47:23.792860 1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0327 19:47:23.831396 1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0327 19:47:23.831450 1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0327 19:47:23.859928 1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0327 19:47:23.859970 1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0327 19:47:24.112540 1 handler_proxy.go:93] no RequestInfo found in the context
E0327 19:47:24.112661 1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0327 19:47:24.112673 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0327 19:47:24.112540 1 handler_proxy.go:93] no RequestInfo found in the context
E0327 19:47:24.112717 1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0327 19:47:24.114770 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0327 19:47:24.902992 1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.104.67.148"}
I0327 19:47:24.911030 1 controller.go:624] quota admission added evaluator for: statefulsets.apps
I0327 19:47:24.966463 1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.99.124.148"}
I0327 19:47:29.367108 1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.99.179.137"}
I0327 19:47:29.397815 1 controller.go:624] quota admission added evaluator for: jobs.batch
W0327 19:47:34.530691 1 handler_proxy.go:93] no RequestInfo found in the context
E0327 19:47:34.530770 1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
E0327 19:47:34.531246 1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.60.156:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.60.156:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.60.156:443: connect: connection refused
E0327 19:47:34.532700 1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.60.156:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.60.156:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.60.156:443: connect: connection refused
I0327 19:47:34.571537 1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
==> kube-controller-manager [5ed77f086ed6] <==
I0327 19:47:52.222094 1 shared_informer.go:311] Waiting for caches to sync for garbage collector
I0327 19:47:52.322672 1 shared_informer.go:318] Caches are synced for garbage collector
I0327 19:47:52.909129 1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
I0327 19:47:52.948323 1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
I0327 19:47:53.067973 1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
I0327 19:47:53.075811 1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
I0327 19:47:53.080626 1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
I0327 19:47:53.080786 1 event.go:376] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0327 19:47:53.093060 1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
I0327 19:47:53.911053 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-9947fc6bf" duration="7.956289ms"
I0327 19:47:53.911192 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-9947fc6bf" duration="65.803µs"
I0327 19:47:53.915923 1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
I0327 19:47:53.923594 1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
I0327 19:47:53.928138 1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
I0327 19:47:53.928309 1 event.go:376] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0327 19:47:53.983237 1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
I0327 19:48:01.740543 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="6.036168ms"
I0327 19:48:01.740638 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="54.475µs"
I0327 19:48:03.136673 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="5.84073ms"
I0327 19:48:03.136796 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="66.019µs"
I0327 19:48:23.013624 1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
I0327 19:48:23.014077 1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
I0327 19:48:23.040682 1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
I0327 19:48:23.041924 1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
I0327 19:51:27.879611 1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="17.153µs"
==> kube-proxy [e56210d620d6] <==
I0327 19:47:22.933963 1 server_others.go:72] "Using iptables proxy"
I0327 19:47:23.001008 1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["10.128.15.240"]
I0327 19:47:23.062624 1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0327 19:47:23.062662 1 server_others.go:168] "Using iptables Proxier"
I0327 19:47:23.070046 1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
I0327 19:47:23.070070 1 server_others.go:529] "Defaulting to no-op detect-local"
I0327 19:47:23.070113 1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0327 19:47:23.070342 1 server.go:865] "Version info" version="v1.29.3"
I0327 19:47:23.070358 1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0327 19:47:23.072911 1 config.go:188] "Starting service config controller"
I0327 19:47:23.072928 1 shared_informer.go:311] Waiting for caches to sync for service config
I0327 19:47:23.072950 1 config.go:97] "Starting endpoint slice config controller"
I0327 19:47:23.072954 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0327 19:47:23.073857 1 config.go:315] "Starting node config controller"
I0327 19:47:23.073876 1 shared_informer.go:311] Waiting for caches to sync for node config
I0327 19:47:23.173870 1 shared_informer.go:318] Caches are synced for endpoint slice config
I0327 19:47:23.173940 1 shared_informer.go:318] Caches are synced for service config
I0327 19:47:23.174283 1 shared_informer.go:318] Caches are synced for node config
==> kube-scheduler [5d7c37758989] <==
E0327 19:47:05.542321 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0327 19:47:05.542326 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0327 19:47:05.542277 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0327 19:47:05.542351 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0327 19:47:05.542339 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0327 19:47:05.542381 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0327 19:47:06.386532 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0327 19:47:06.386589 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0327 19:47:06.402843 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0327 19:47:06.402881 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0327 19:47:06.415586 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0327 19:47:06.415623 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0327 19:47:06.461511 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0327 19:47:06.461528 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0327 19:47:06.461553 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0327 19:47:06.461553 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0327 19:47:06.507026 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0327 19:47:06.507066 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0327 19:47:06.526394 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0327 19:47:06.526448 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0327 19:47:06.636562 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0327 19:47:06.636610 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0327 19:47:06.680281 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0327 19:47:06.680323 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0327 19:47:07.138639 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Thu 2024-02-29 08:28:27 UTC, end at Wed 2024-03-27 19:51:29 UTC. --
Mar 27 19:50:10 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:10.746957 787231 scope.go:117] "RemoveContainer" containerID="49013e16b1c71ee46e566e8f014699638841eeeed565099796e3f94f1b3bd308"
Mar 27 19:50:10 ubuntu-20-agent-15 kubelet[787231]: E0327 19:50:10.747428 787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
Mar 27 19:50:21 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:21.746662 787231 scope.go:117] "RemoveContainer" containerID="49013e16b1c71ee46e566e8f014699638841eeeed565099796e3f94f1b3bd308"
Mar 27 19:50:21 ubuntu-20-agent-15 kubelet[787231]: E0327 19:50:21.747110 787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
Mar 27 19:50:34 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:34.746808 787231 scope.go:117] "RemoveContainer" containerID="49013e16b1c71ee46e566e8f014699638841eeeed565099796e3f94f1b3bd308"
Mar 27 19:50:34 ubuntu-20-agent-15 kubelet[787231]: E0327 19:50:34.747259 787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
Mar 27 19:50:48 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:48.747583 787231 scope.go:117] "RemoveContainer" containerID="49013e16b1c71ee46e566e8f014699638841eeeed565099796e3f94f1b3bd308"
Mar 27 19:50:50 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:50.285686 787231 scope.go:117] "RemoveContainer" containerID="49013e16b1c71ee46e566e8f014699638841eeeed565099796e3f94f1b3bd308"
Mar 27 19:50:50 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:50.286150 787231 scope.go:117] "RemoveContainer" containerID="e6b3e5d9b741ce574dd6db0c3dbf26a37fc208d66031963616733c1c2e071aa2"
Mar 27 19:50:50 ubuntu-20-agent-15 kubelet[787231]: E0327 19:50:50.286929 787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
Mar 27 19:50:51 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:51.307700 787231 scope.go:117] "RemoveContainer" containerID="e6b3e5d9b741ce574dd6db0c3dbf26a37fc208d66031963616733c1c2e071aa2"
Mar 27 19:50:51 ubuntu-20-agent-15 kubelet[787231]: E0327 19:50:51.308055 787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
Mar 27 19:50:54 ubuntu-20-agent-15 kubelet[787231]: I0327 19:50:54.115393 787231 scope.go:117] "RemoveContainer" containerID="e6b3e5d9b741ce574dd6db0c3dbf26a37fc208d66031963616733c1c2e071aa2"
Mar 27 19:50:54 ubuntu-20-agent-15 kubelet[787231]: E0327 19:50:54.116059 787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
Mar 27 19:51:08 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:08.747310 787231 scope.go:117] "RemoveContainer" containerID="e6b3e5d9b741ce574dd6db0c3dbf26a37fc208d66031963616733c1c2e071aa2"
Mar 27 19:51:08 ubuntu-20-agent-15 kubelet[787231]: E0327 19:51:08.747971 787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
Mar 27 19:51:23 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:23.747044 787231 scope.go:117] "RemoveContainer" containerID="e6b3e5d9b741ce574dd6db0c3dbf26a37fc208d66031963616733c1c2e071aa2"
Mar 27 19:51:23 ubuntu-20-agent-15 kubelet[787231]: E0327 19:51:23.747454 787231 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vpxgx_gadget(43c5a10d-8c55-4d63-935b-1aaa886a793f)\"" pod="gadget/gadget-vpxgx" podUID="43c5a10d-8c55-4d63-935b-1aaa886a793f"
Mar 27 19:51:28 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:28.259499 787231 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndrmj\" (UniqueName: \"kubernetes.io/projected/7e30047c-df90-44cb-b9a2-98b6574dd90f-kube-api-access-ndrmj\") pod \"7e30047c-df90-44cb-b9a2-98b6574dd90f\" (UID: \"7e30047c-df90-44cb-b9a2-98b6574dd90f\") "
Mar 27 19:51:28 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:28.261467 787231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e30047c-df90-44cb-b9a2-98b6574dd90f-kube-api-access-ndrmj" (OuterVolumeSpecName: "kube-api-access-ndrmj") pod "7e30047c-df90-44cb-b9a2-98b6574dd90f" (UID: "7e30047c-df90-44cb-b9a2-98b6574dd90f"). InnerVolumeSpecName "kube-api-access-ndrmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 27 19:51:28 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:28.360696 787231 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ndrmj\" (UniqueName: \"kubernetes.io/projected/7e30047c-df90-44cb-b9a2-98b6574dd90f-kube-api-access-ndrmj\") on node \"ubuntu-20-agent-15\" DevicePath \"\""
Mar 27 19:51:28 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:28.759334 787231 scope.go:117] "RemoveContainer" containerID="606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e"
Mar 27 19:51:28 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:28.782057 787231 scope.go:117] "RemoveContainer" containerID="606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e"
Mar 27 19:51:28 ubuntu-20-agent-15 kubelet[787231]: E0327 19:51:28.783293 787231 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e" containerID="606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e"
Mar 27 19:51:28 ubuntu-20-agent-15 kubelet[787231]: I0327 19:51:28.783348 787231 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e"} err="failed to get container status \"606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e\": rpc error: code = Unknown desc = Error response from daemon: No such container: 606866de2fb6b947a30e25cf44a8a27e2bbad24eb3e295d3944cae376520fd8e"
==> storage-provisioner [7d6b506e168d] <==
I0327 19:47:24.192155 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0327 19:47:24.208572 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0327 19:47:24.208624 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0327 19:47:24.218949 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0327 19:47:24.219876 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-15_ae19259a-d193-4ec1-8d06-fd4003ce563d!
I0327 19:47:24.220776 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5a637a87-f8d7-45ab-a0c1-c98ca435982f", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-15_ae19259a-d193-4ec1-8d06-fd4003ce563d became leader
I0327 19:47:24.321945 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-15_ae19259a-d193-4ec1-8d06-fd4003ce563d!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run: kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (205.95s)