=== RUN TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run: kubectl --context addons-662808 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run: kubectl --context addons-662808 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run: kubectl --context addons-662808 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run: kubectl --context addons-662808 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run: kubectl --context addons-662808 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run: kubectl --context addons-662808 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run: kubectl --context addons-662808 get pvc test-pvc -o jsonpath={.status.phase} -n default
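The repeated helpers_test.go:394 runs above are the PVC wait loop: the helper keeps shelling out to kubectl and reading {.status.phase} until the claim reports Bound or the 5m0s budget from addons_test.go:898 runs out. Below is a minimal sketch of that pattern, assuming nothing about minikube's real helper beyond the command visible in the log; the name waitForPVCBound and the 2s poll interval are illustrative, not minikube's actual API.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls `kubectl get pvc ... -o jsonpath={.status.phase}` (the exact
// command repeated in the log above) until the claim is Bound or the timeout expires.
func waitForPVCBound(ctx context.Context, kubecontext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.CommandContext(ctx, "kubectl", "--context", kubecontext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // hypothetical poll interval
	}
	return fmt.Errorf("pvc %q in namespace %q not Bound within %v", name, namespace, timeout)
}

func main() {
	err := waitForPVCBound(context.Background(), "addons-662808", "test-pvc", "default", 5*time.Minute)
	fmt.Println(err) // nil once local-path-provisioner has bound test-pvc
}

In this run the claim did bind; the test only failed later, while waiting for the pod that consumes it.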
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d80409e4-1900-4a8f-9c48-4e8e81479f9a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
addons_test.go:901: ***** TestAddons/parallel/LocalPath: pod "run=test-local-path" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:901: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-662808 -n addons-662808
addons_test.go:901: TestAddons/parallel/LocalPath: showing logs for failed pods as of 2025-04-07 12:55:37.189600628 +0000 UTC m=+486.935123929
addons_test.go:901: (dbg) Run: kubectl --context addons-662808 describe po test-local-path -n default
addons_test.go:901: (dbg) kubectl --context addons-662808 describe po test-local-path -n default:
Name: test-local-path
Namespace: default
Priority: 0
Service Account: default
Node: addons-662808/192.168.49.2
Start Time: Mon, 07 Apr 2025 12:52:36 +0000
Labels: run=test-local-path
Annotations: <none>
Status: Pending
IP: 10.244.0.36
IPs:
IP: 10.244.0.36
Containers:
busybox:
Container ID:
Image: busybox:stable
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
echo 'local-path-provisioner' > /test/file1
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/test from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5ffsn (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: test-pvc
ReadOnly: false
kube-api-access-5ffsn:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m1s default-scheduler Successfully assigned default/test-local-path to addons-662808
Warning Failed 98s (x4 over 3m) kubelet Failed to pull image "busybox:stable": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal BackOff 24s (x11 over 2m59s) kubelet Back-off pulling image "busybox:stable"
Warning Failed 24s (x11 over 2m59s) kubelet Error: ImagePullBackOff
Normal Pulling 9s (x5 over 3m) kubelet Pulling image "busybox:stable"
Warning Failed 9s (x5 over 3m) kubelet Error: ErrImagePull
Warning Failed 9s kubelet Failed to pull image "busybox:stable": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
addons_test.go:901: (dbg) Run: kubectl --context addons-662808 logs test-local-path -n default
addons_test.go:901: (dbg) Non-zero exit: kubectl --context addons-662808 logs test-local-path -n default: exit status 1 (69.649998ms)
** stderr **
Error from server (BadRequest): container "busybox" in pod "test-local-path" is waiting to start: trying and failing to pull image
** /stderr **
addons_test.go:901: kubectl --context addons-662808 logs test-local-path -n default: exit status 1
addons_test.go:902: failed waiting for test-local-path pod: run=test-local-path within 3m0s: context deadline exceeded
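Everything above points at a single root cause: the busybox:stable pull was throttled by Docker Hub's unauthenticated rate limit (the toomanyrequests events), so the pod never left ImagePullBackOff and the 3m0s pod wait expired. Docker publishes a way to inspect the limit that currently applies to a host; here is a small sketch of that check in Go. The endpoints and header names follow Docker's documented rate-limit check; the program itself is not part of this test suite.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// 1. Fetch an anonymous pull token for the ratelimitpreview/test repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		log.Fatal(err)
	}

	// 2. HEAD a manifest; the response headers report the limit without consuming a pull.
	req, err := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer res.Body.Close()

	// Values look like "100;w=21600": a pull budget per rolling window for this source IP.
	fmt.Println("ratelimit-limit:        ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:    ", res.Header.Get("ratelimit-remaining"))
	fmt.Println("docker-ratelimit-source:", res.Header.Get("docker-ratelimit-source"))
}

On a shared CI host such as ubuntu-20-agent-13, the remaining budget can easily hit zero mid-run; authenticating the pull, mirroring the image, or pre-loading it into the cluster (for example with minikube image load busybox:stable) are the usual mitigations. The post-mortem for the failed run follows.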
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-662808
helpers_test.go:235: (dbg) docker inspect addons-662808:
-- stdout --
[
{
"Id": "99376af8541bfcd1d208b4d57cccef4b5cb47011d904401f29109a155929b2ed",
"Created": "2025-04-07T12:48:05.508902626Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 775267,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-04-07T12:48:05.541753081Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:037bd1b5a0f63899880a74b20d0e40b693fd199ade4ed9b883be5ed5726d15a6",
"ResolvConfPath": "/var/lib/docker/containers/99376af8541bfcd1d208b4d57cccef4b5cb47011d904401f29109a155929b2ed/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/99376af8541bfcd1d208b4d57cccef4b5cb47011d904401f29109a155929b2ed/hostname",
"HostsPath": "/var/lib/docker/containers/99376af8541bfcd1d208b4d57cccef4b5cb47011d904401f29109a155929b2ed/hosts",
"LogPath": "/var/lib/docker/containers/99376af8541bfcd1d208b4d57cccef4b5cb47011d904401f29109a155929b2ed/99376af8541bfcd1d208b4d57cccef4b5cb47011d904401f29109a155929b2ed-json.log",
"Name": "/addons-662808",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-662808:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "addons-662808",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "99376af8541bfcd1d208b4d57cccef4b5cb47011d904401f29109a155929b2ed",
"LowerDir": "/var/lib/docker/overlay2/a846772ad06386bb75dad5378a7df5577e11414d9a23e93d517b8eeb5bdf1dae-init/diff:/var/lib/docker/overlay2/4ad95e7f4a49b487176ca9dc3e3437ef3df8ea71a4a72c4a666a7db5084d5e6d/diff",
"MergedDir": "/var/lib/docker/overlay2/a846772ad06386bb75dad5378a7df5577e11414d9a23e93d517b8eeb5bdf1dae/merged",
"UpperDir": "/var/lib/docker/overlay2/a846772ad06386bb75dad5378a7df5577e11414d9a23e93d517b8eeb5bdf1dae/diff",
"WorkDir": "/var/lib/docker/overlay2/a846772ad06386bb75dad5378a7df5577e11414d9a23e93d517b8eeb5bdf1dae/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "addons-662808",
"Source": "/var/lib/docker/volumes/addons-662808/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "addons-662808",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-662808",
"name.minikube.sigs.k8s.io": "addons-662808",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "7d1065e047604bb500bed362c0162463a013e167276aef9048480f5b852e254f",
"SandboxKey": "/var/run/docker/netns/7d1065e04760",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32768"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32772"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32770"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32771"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-662808": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "3a:1f:37:9a:f9:52",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "87157fb96bf148188bf8cec10e52372da9869a32414022d777f4b879d54fa585",
"EndpointID": "0bf267d4e286ac9e4068c794eca5091e45e3f276a38dc22b011ce4e630715f66",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-662808",
"99376af8541b"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-662808 -n addons-662808
helpers_test.go:244: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p addons-662808 logs -n 25
helpers_test.go:252: TestAddons/parallel/LocalPath logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | --download-only -p | download-docker-089924 | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC | |
| | download-docker-089924 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p download-docker-089924 | download-docker-089924 | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC | 07 Apr 25 12:47 UTC |
| start | --download-only -p | binary-mirror-222869 | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC | |
| | binary-mirror-222869 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:43139 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p binary-mirror-222869 | binary-mirror-222869 | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC | 07 Apr 25 12:47 UTC |
| addons | disable dashboard -p | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC | |
| | addons-662808 | | | | | |
| addons | enable dashboard -p | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC | |
| | addons-662808 | | | | | |
| start | -p addons-662808 --wait=true | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC | 07 Apr 25 12:51 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --addons=amd-gpu-device-plugin | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| addons | addons-662808 addons disable | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:51 UTC | 07 Apr 25 12:51 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-662808 addons disable | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| | gcp-auth --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | enable headlamp | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| | -p addons-662808 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-662808 addons | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| | disable nvidia-device-plugin | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-662808 addons | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| | disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-662808 addons disable | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| ip | addons-662808 ip | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| addons | addons-662808 addons disable | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-662808 addons | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| | disable inspektor-gadget | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-662808 addons | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| | disable cloud-spanner | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | addons-662808 ssh curl -s | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| | http://127.0.0.1/ -H 'Host: | | | | | |
| | nginx.example.com' | | | | | |
| ip | addons-662808 ip | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| addons | addons-662808 addons disable | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| | ingress-dns --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-662808 addons disable | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| | ingress --alsologtostderr -v=1 | | | | | |
| addons | addons-662808 addons disable | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| | amd-gpu-device-plugin | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-662808 addons | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-662808 addons disable | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| addons | addons-662808 addons | addons-662808 | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/04/07 12:47:42
Running on machine: ubuntu-20-agent-13
Binary: Built with gc go1.24.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0407 12:47:42.068395 774657 out.go:345] Setting OutFile to fd 1 ...
I0407 12:47:42.068917 774657 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:47:42.068935 774657 out.go:358] Setting ErrFile to fd 2...
I0407 12:47:42.068942 774657 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:47:42.069206 774657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
I0407 12:47:42.069837 774657 out.go:352] Setting JSON to false
I0407 12:47:42.070697 774657 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":73811,"bootTime":1743956251,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0407 12:47:42.070800 774657 start.go:139] virtualization: kvm guest
I0407 12:47:42.072634 774657 out.go:177] * [addons-662808] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0407 12:47:42.073939 774657 out.go:177] - MINIKUBE_LOCATION=20598
I0407 12:47:42.073941 774657 notify.go:220] Checking for updates...
I0407 12:47:42.075298 774657 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0407 12:47:42.076576 774657 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20598-766623/kubeconfig
I0407 12:47:42.077656 774657 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-766623/.minikube
I0407 12:47:42.078795 774657 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0407 12:47:42.079934 774657 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0407 12:47:42.081457 774657 driver.go:394] Setting default libvirt URI to qemu:///system
I0407 12:47:42.103349 774657 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0407 12:47:42.103496 774657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0407 12:47:42.151222 774657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2025-04-07 12:47:42.142912406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0407 12:47:42.151320 774657 docker.go:318] overlay module found
I0407 12:47:42.153113 774657 out.go:177] * Using the docker driver based on user configuration
I0407 12:47:42.154438 774657 start.go:297] selected driver: docker
I0407 12:47:42.154454 774657 start.go:901] validating driver "docker" against <nil>
I0407 12:47:42.154466 774657 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0407 12:47:42.155175 774657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0407 12:47:42.203396 774657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2025-04-07 12:47:42.195402372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0407 12:47:42.203632 774657 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0407 12:47:42.203839 774657 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0407 12:47:42.205386 774657 out.go:177] * Using Docker driver with root privileges
I0407 12:47:42.206475 774657 cni.go:84] Creating CNI manager for ""
I0407 12:47:42.206538 774657 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0407 12:47:42.206548 774657 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0407 12:47:42.206609 774657 start.go:340] cluster config:
{Name:addons-662808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-662808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 12:47:42.207789 774657 out.go:177] * Starting "addons-662808" primary control-plane node in "addons-662808" cluster
I0407 12:47:42.208797 774657 cache.go:121] Beginning downloading kic base image for docker with docker
I0407 12:47:42.209864 774657 out.go:177] * Pulling base image v0.0.46-1743675393-20591 ...
I0407 12:47:42.210872 774657 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0407 12:47:42.210905 774657 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
I0407 12:47:42.210918 774657 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-766623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
I0407 12:47:42.210927 774657 cache.go:56] Caching tarball of preloaded images
I0407 12:47:42.211004 774657 preload.go:172] Found /home/jenkins/minikube-integration/20598-766623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0407 12:47:42.211017 774657 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
I0407 12:47:42.211350 774657 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/config.json ...
I0407 12:47:42.211384 774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/config.json: {Name:mk68064d92eeeab5e23dc5c9eec6bb53756c9e10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:47:42.226207 774657 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 to local cache
I0407 12:47:42.226309 774657 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local cache directory
I0407 12:47:42.226330 774657 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local cache directory, skipping pull
I0407 12:47:42.226336 774657 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in cache, skipping pull
I0407 12:47:42.226348 774657 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 as a tarball
I0407 12:47:42.226359 774657 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 from local cache
I0407 12:47:54.176527 774657 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 from cached tarball
I0407 12:47:54.176572 774657 cache.go:230] Successfully downloaded all kic artifacts
I0407 12:47:54.176619 774657 start.go:360] acquireMachinesLock for addons-662808: {Name:mkbe122773630acbb9c50768cde9ae1b5a1617df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0407 12:47:54.176730 774657 start.go:364] duration metric: took 89.433µs to acquireMachinesLock for "addons-662808"
I0407 12:47:54.176760 774657 start.go:93] Provisioning new machine with config: &{Name:addons-662808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-662808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0407 12:47:54.176918 774657 start.go:125] createHost starting for "" (driver="docker")
I0407 12:47:54.178582 774657 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0407 12:47:54.178883 774657 start.go:159] libmachine.API.Create for "addons-662808" (driver="docker")
I0407 12:47:54.178921 774657 client.go:168] LocalClient.Create starting
I0407 12:47:54.179033 774657 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca.pem
I0407 12:47:54.442327 774657 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/cert.pem
I0407 12:47:54.550841 774657 cli_runner.go:164] Run: docker network inspect addons-662808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0407 12:47:54.566844 774657 cli_runner.go:211] docker network inspect addons-662808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0407 12:47:54.566917 774657 network_create.go:284] running [docker network inspect addons-662808] to gather additional debugging logs...
I0407 12:47:54.566935 774657 cli_runner.go:164] Run: docker network inspect addons-662808
W0407 12:47:54.582079 774657 cli_runner.go:211] docker network inspect addons-662808 returned with exit code 1
I0407 12:47:54.582123 774657 network_create.go:287] error running [docker network inspect addons-662808]: docker network inspect addons-662808: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-662808 not found
I0407 12:47:54.582146 774657 network_create.go:289] output of [docker network inspect addons-662808]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-662808 not found
** /stderr **
I0407 12:47:54.582219 774657 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0407 12:47:54.598125 774657 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00167d130}
I0407 12:47:54.598182 774657 network_create.go:124] attempt to create docker network addons-662808 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0407 12:47:54.598253 774657 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-662808 addons-662808
I0407 12:47:54.646285 774657 network_create.go:108] docker network addons-662808 192.168.49.0/24 created
I0407 12:47:54.646329 774657 kic.go:121] calculated static IP "192.168.49.2" for the "addons-662808" container
I0407 12:47:54.646406 774657 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0407 12:47:54.661512 774657 cli_runner.go:164] Run: docker volume create addons-662808 --label name.minikube.sigs.k8s.io=addons-662808 --label created_by.minikube.sigs.k8s.io=true
I0407 12:47:54.677853 774657 oci.go:103] Successfully created a docker volume addons-662808
I0407 12:47:54.677933 774657 cli_runner.go:164] Run: docker run --rm --name addons-662808-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-662808 --entrypoint /usr/bin/test -v addons-662808:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -d /var/lib
I0407 12:48:01.485343 774657 cli_runner.go:217] Completed: docker run --rm --name addons-662808-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-662808 --entrypoint /usr/bin/test -v addons-662808:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -d /var/lib: (6.807348151s)
I0407 12:48:01.485394 774657 oci.go:107] Successfully prepared a docker volume addons-662808
I0407 12:48:01.485432 774657 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0407 12:48:01.485466 774657 kic.go:194] Starting extracting preloaded images to volume ...
I0407 12:48:01.485545 774657 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20598-766623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-662808:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -I lz4 -xf /preloaded.tar -C /extractDir
I0407 12:48:05.446250 774657 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20598-766623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-662808:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -I lz4 -xf /preloaded.tar -C /extractDir: (3.960640639s)
I0407 12:48:05.446283 774657 kic.go:203] duration metric: took 3.960814298s to extract preloaded images to volume ...
W0407 12:48:05.446433 774657 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0407 12:48:05.446552 774657 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0407 12:48:05.493309 774657 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-662808 --name addons-662808 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-662808 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-662808 --network addons-662808 --ip 192.168.49.2 --volume addons-662808:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727
I0407 12:48:05.774095 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Running}}
I0407 12:48:05.792278 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:05.810193 774657 cli_runner.go:164] Run: docker exec addons-662808 stat /var/lib/dpkg/alternatives/iptables
I0407 12:48:05.852163 774657 oci.go:144] the created container "addons-662808" has a running status.
I0407 12:48:05.852195 774657 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa...
I0407 12:48:05.978862 774657 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0407 12:48:06.000001 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:06.020298 774657 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0407 12:48:06.020323 774657 kic_runner.go:114] Args: [docker exec --privileged addons-662808 chown docker:docker /home/docker/.ssh/authorized_keys]
I0407 12:48:06.060596 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:06.079717 774657 machine.go:93] provisionDockerMachine start ...
I0407 12:48:06.079853 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:06.099291 774657 main.go:141] libmachine: Using SSH client type: native
I0407 12:48:06.099639 774657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0407 12:48:06.099675 774657 main.go:141] libmachine: About to run SSH command:
hostname
I0407 12:48:06.100553 774657 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59594->127.0.0.1:32768: read: connection reset by peer
I0407 12:48:09.222881 774657 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-662808
I0407 12:48:09.222916 774657 ubuntu.go:169] provisioning hostname "addons-662808"
I0407 12:48:09.222976 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:09.239971 774657 main.go:141] libmachine: Using SSH client type: native
I0407 12:48:09.240210 774657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0407 12:48:09.240228 774657 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-662808 && echo "addons-662808" | sudo tee /etc/hostname
I0407 12:48:09.369973 774657 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-662808
I0407 12:48:09.370040 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:09.386741 774657 main.go:141] libmachine: Using SSH client type: native
I0407 12:48:09.387013 774657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0407 12:48:09.387038 774657 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-662808' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-662808/g' /etc/hosts;
else
echo '127.0.1.1 addons-662808' | sudo tee -a /etc/hosts;
fi
fi
I0407 12:48:09.507414 774657 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0407 12:48:09.507468 774657 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20598-766623/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-766623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-766623/.minikube}
I0407 12:48:09.507500 774657 ubuntu.go:177] setting up certificates
I0407 12:48:09.507521 774657 provision.go:84] configureAuth start
I0407 12:48:09.507594 774657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-662808
I0407 12:48:09.524163 774657 provision.go:143] copyHostCerts
I0407 12:48:09.524248 774657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-766623/.minikube/key.pem (1675 bytes)
I0407 12:48:09.524361 774657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-766623/.minikube/ca.pem (1078 bytes)
I0407 12:48:09.524445 774657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-766623/.minikube/cert.pem (1123 bytes)
I0407 12:48:09.524501 774657 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-766623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca-key.pem org=jenkins.addons-662808 san=[127.0.0.1 192.168.49.2 addons-662808 localhost minikube]
I0407 12:48:09.901216 774657 provision.go:177] copyRemoteCerts
I0407 12:48:09.901279 774657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0407 12:48:09.901316 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:09.918149 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:10.008028 774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0407 12:48:10.029806 774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0407 12:48:10.050777 774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0407 12:48:10.071779 774657 provision.go:87] duration metric: took 564.238868ms to configureAuth
I0407 12:48:10.071812 774657 ubuntu.go:193] setting minikube options for container-runtime
I0407 12:48:10.071994 774657 config.go:182] Loaded profile config "addons-662808": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:48:10.072050 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:10.088690 774657 main.go:141] libmachine: Using SSH client type: native
I0407 12:48:10.088919 774657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0407 12:48:10.088931 774657 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0407 12:48:10.215747 774657 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0407 12:48:10.215783 774657 ubuntu.go:71] root file system type: overlay
I0407 12:48:10.215937 774657 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0407 12:48:10.216016 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:10.232577 774657 main.go:141] libmachine: Using SSH client type: native
I0407 12:48:10.232838 774657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0407 12:48:10.232945 774657 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0407 12:48:10.365819 774657 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0407 12:48:10.365904 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:10.382220 774657 main.go:141] libmachine: Using SSH client type: native
I0407 12:48:10.382479 774657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 127.0.0.1 32768 <nil> <nil>}
I0407 12:48:10.382499 774657 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0407 12:48:11.095723 774657 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2025-03-25 15:05:51.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2025-04-07 12:48:10.360505775 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this option.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
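The SSH command above only swaps in the rendered docker.service (with the cleared-then-reset ExecStart= explained in the diff) and restarts the daemon when `diff -u` reports a difference, so an unchanged unit never triggers a restart. A minimal Go sketch of the same replace-if-changed pattern; the paths come from the log, but the sketch is illustrative and not minikube's actual implementation:

// replace_if_changed.go - illustrative sketch, not minikube's code.
// Only install the new docker.service and restart the daemon when the
// rendered unit differs from what is already on disk.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	const current = "/lib/systemd/system/docker.service"
	const rendered = "/lib/systemd/system/docker.service.new"

	oldUnit, _ := os.ReadFile(current) // a missing file simply counts as "changed"
	newUnit, err := os.ReadFile(rendered)
	if err != nil {
		log.Fatal(err)
	}
	if bytes.Equal(oldUnit, newUnit) {
		return // nothing changed, leave the running daemon alone
	}
	if err := os.Rename(rendered, current); err != nil {
		log.Fatal(err)
	}
	run("systemctl", "daemon-reload")
	run("systemctl", "enable", "docker")
	run("systemctl", "restart", "docker")
}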
I0407 12:48:11.095755 774657 machine.go:96] duration metric: took 5.016014488s to provisionDockerMachine
I0407 12:48:11.095767 774657 client.go:171] duration metric: took 16.916836853s to LocalClient.Create
I0407 12:48:11.095785 774657 start.go:167] duration metric: took 16.916902688s to libmachine.API.Create "addons-662808"
I0407 12:48:11.095792 774657 start.go:293] postStartSetup for "addons-662808" (driver="docker")
I0407 12:48:11.095802 774657 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0407 12:48:11.095866 774657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0407 12:48:11.095907 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:11.113234 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:11.204318 774657 ssh_runner.go:195] Run: cat /etc/os-release
I0407 12:48:11.207292 774657 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0407 12:48:11.207320 774657 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0407 12:48:11.207328 774657 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0407 12:48:11.207334 774657 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0407 12:48:11.207344 774657 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-766623/.minikube/addons for local assets ...
I0407 12:48:11.207409 774657 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-766623/.minikube/files for local assets ...
I0407 12:48:11.207451 774657 start.go:296] duration metric: took 111.650938ms for postStartSetup
I0407 12:48:11.207755 774657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-662808
I0407 12:48:11.224375 774657 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/config.json ...
I0407 12:48:11.224597 774657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0407 12:48:11.224637 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:11.240462 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:11.328174 774657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0407 12:48:11.332136 774657 start.go:128] duration metric: took 17.155193027s to createHost
I0407 12:48:11.332163 774657 start.go:83] releasing machines lock for "addons-662808", held for 17.155414505s
I0407 12:48:11.332230 774657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-662808
I0407 12:48:11.348838 774657 ssh_runner.go:195] Run: cat /version.json
I0407 12:48:11.348875 774657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0407 12:48:11.348886 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:11.348945 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:11.366682 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:11.368377 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:11.451021 774657 ssh_runner.go:195] Run: systemctl --version
I0407 12:48:11.524889 774657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0407 12:48:11.529205 774657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0407 12:48:11.551381 774657 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0407 12:48:11.551468 774657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0407 12:48:11.574270 774657 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0407 12:48:11.574303 774657 start.go:495] detecting cgroup driver to use...
I0407 12:48:11.574340 774657 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0407 12:48:11.574460 774657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0407 12:48:11.588418 774657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0407 12:48:11.597085 774657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0407 12:48:11.605784 774657 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0407 12:48:11.605842 774657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0407 12:48:11.614567 774657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0407 12:48:11.622869 774657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0407 12:48:11.631098 774657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0407 12:48:11.639571 774657 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0407 12:48:11.647503 774657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0407 12:48:11.655901 774657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0407 12:48:11.664489 774657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
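The run of sed commands above patches /etc/containerd/config.toml in place so containerd matches the detected "cgroupfs" driver and the expected pause image. A small Go sketch of the same in-place patching, covering just two of the edits and assuming the stock config.toml layout; it is illustrative, not minikube's implementation:

// patch_containerd.go - illustrative sketch of the sed-style edits above.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	s := string(data)

	// SystemdCgroup -> false, matching the detected "cgroupfs" driver.
	s = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAllString(s, "${1}SystemdCgroup = false")
	// Pin the sandbox (pause) image used for pod infrastructure containers.
	s = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAllString(s, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`)

	if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
		log.Fatal(err)
	}
}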
I0407 12:48:11.673027 774657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0407 12:48:11.680114 774657 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0407 12:48:11.680157 774657 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0407 12:48:11.692387 774657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
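The sysctl probe above fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is switched on before containerd is restarted, which is what kubeadm's preflight checks later rely on. A Go sketch of that preparation, illustrative only:

// netfilter_prep.go - illustrative sketch of the checks above, not minikube's code.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// The bridge sysctl file only exists once the br_netfilter module is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v\n%s", err, out)
		}
	}
	// Equivalent to: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}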
I0407 12:48:11.700571 774657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 12:48:11.774499 774657 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0407 12:48:11.857329 774657 start.go:495] detecting cgroup driver to use...
I0407 12:48:11.857443 774657 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0407 12:48:11.857518 774657 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0407 12:48:11.868417 774657 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0407 12:48:11.868471 774657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0407 12:48:11.879128 774657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0407 12:48:11.895683 774657 ssh_runner.go:195] Run: which cri-dockerd
I0407 12:48:11.899010 774657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0407 12:48:11.908005 774657 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0407 12:48:11.929858 774657 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0407 12:48:12.029791 774657 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0407 12:48:12.121575 774657 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0407 12:48:12.121717 774657 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0407 12:48:12.138730 774657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 12:48:12.220571 774657 ssh_runner.go:195] Run: sudo systemctl restart docker
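Here dockerd itself is pointed at the "cgroupfs" driver by copying a small /etc/docker/daemon.json and restarting the service. The log does not print the file's contents, so the JSON below is an assumption about a typical way to pin the driver (the documented exec-opts key), shown as a hedged Go sketch rather than minikube's actual file:

// docker_daemon_json.go - illustrative sketch only. The 130-byte daemon.json
// copied above is not shown in the log; the exec-opts key below is an assumed,
// commonly documented way to make dockerd use the cgroupfs driver.
package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"}, // assumed content
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/docker/daemon.json", data, 0o644); err != nil {
		log.Fatal(err)
	}
	// As in the log, a daemon-reload plus `systemctl restart docker` is still
	// needed before the new driver takes effect.
}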
I0407 12:48:12.500686 774657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0407 12:48:12.511702 774657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0407 12:48:12.522103 774657 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0407 12:48:12.602261 774657 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0407 12:48:12.679105 774657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 12:48:12.750592 774657 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0407 12:48:12.762496 774657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0407 12:48:12.771946 774657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 12:48:12.846916 774657 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0407 12:48:12.905360 774657 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0407 12:48:12.905445 774657 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0407 12:48:12.909355 774657 start.go:563] Will wait 60s for crictl version
I0407 12:48:12.909419 774657 ssh_runner.go:195] Run: which crictl
I0407 12:48:12.912591 774657 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0407 12:48:12.944347 774657 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 28.0.4
RuntimeApiVersion: v1
I0407 12:48:12.944423 774657 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0407 12:48:12.967365 774657 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0407 12:48:12.992089 774657 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 28.0.4 ...
I0407 12:48:12.992170 774657 cli_runner.go:164] Run: docker network inspect addons-662808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0407 12:48:13.008275 774657 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0407 12:48:13.011925 774657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
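The bash one-liner above makes the host.minikube.internal entry idempotent: strip any existing line for the host, append the fresh mapping, and copy the result back over /etc/hosts (the same trick is used again later for control-plane.minikube.internal). A Go sketch of that ensure-one-entry pattern, illustrative only:

// hosts_entry.go - illustrative sketch of the /etc/hosts rewrite above, not minikube's code.
package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in the host name and appends
// "ip<TAB>host", so repeated runs leave exactly one entry.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) && !strings.HasSuffix(line, " "+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}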
I0407 12:48:13.021905 774657 kubeadm.go:883] updating cluster {Name:addons-662808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-662808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0407 12:48:13.022022 774657 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0407 12:48:13.022076 774657 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0407 12:48:13.040590 774657 docker.go:689] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0407 12:48:13.040627 774657 docker.go:619] Images already preloaded, skipping extraction
I0407 12:48:13.040707 774657 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0407 12:48:13.059755 774657 docker.go:689] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0407 12:48:13.059778 774657 cache_images.go:84] Images are preloaded, skipping loading
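Preload detection is just a comparison of `docker images` output against the image list required for this Kubernetes version; since every tag from the preload tarball is already present, extraction is skipped. A Go sketch of that check, using the image list shown above; illustrative, not minikube's code:

// preload_check.go - illustrative sketch of the "images already preloaded" check.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

var required = []string{
	"registry.k8s.io/kube-apiserver:v1.32.2",
	"registry.k8s.io/kube-proxy:v1.32.2",
	"registry.k8s.io/kube-controller-manager:v1.32.2",
	"registry.k8s.io/kube-scheduler:v1.32.2",
	"registry.k8s.io/etcd:3.5.16-0",
	"registry.k8s.io/coredns/coredns:v1.11.3",
	"registry.k8s.io/pause:3.10",
	"gcr.io/k8s-minikube/storage-provisioner:v5",
}

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	have := map[string]bool{}
	for _, tag := range strings.Fields(string(out)) {
		have[tag] = true
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing, would load from preload tarball:", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}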
I0407 12:48:13.059789 774657 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.2 docker true true} ...
I0407 12:48:13.059876 774657 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-662808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:addons-662808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0407 12:48:13.059925 774657 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0407 12:48:13.103155 774657 cni.go:84] Creating CNI manager for ""
I0407 12:48:13.103191 774657 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0407 12:48:13.103214 774657 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0407 12:48:13.103250 774657 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-662808 NodeName:addons-662808 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0407 12:48:13.103399 774657 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "addons-662808"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.49.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0407 12:48:13.103497 774657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0407 12:48:13.111876 774657 binaries.go:44] Found k8s binaries, skipping transfer
I0407 12:48:13.111938 774657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0407 12:48:13.119863 774657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0407 12:48:13.135667 774657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0407 12:48:13.151175 774657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
I0407 12:48:13.166535 774657 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0407 12:48:13.169415 774657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0407 12:48:13.178899 774657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 12:48:13.251549 774657 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0407 12:48:13.263762 774657 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808 for IP: 192.168.49.2
I0407 12:48:13.263789 774657 certs.go:194] generating shared ca certs ...
I0407 12:48:13.263809 774657 certs.go:226] acquiring lock for ca certs: {Name:mk3cba72d8e0a281d2351f9394ddea5be5fe0baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:48:13.263953 774657 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-766623/.minikube/ca.key
I0407 12:48:13.385095 774657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-766623/.minikube/ca.crt ...
I0407 12:48:13.385127 774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/ca.crt: {Name:mk893306cac75a6632c3479250f37deaf8ffa61c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:48:13.385288 774657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-766623/.minikube/ca.key ...
I0407 12:48:13.385299 774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/ca.key: {Name:mk83feba8c710103c1fe8fbb1c81e9479f11811c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:48:13.385374 774657 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-766623/.minikube/proxy-client-ca.key
I0407 12:48:13.571423 774657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-766623/.minikube/proxy-client-ca.crt ...
I0407 12:48:13.571463 774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/proxy-client-ca.crt: {Name:mk652d34a62049fb2318a1e2d757c0f3d3e66935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:48:13.571622 774657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-766623/.minikube/proxy-client-ca.key ...
I0407 12:48:13.571633 774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/proxy-client-ca.key: {Name:mk87f2f162271f48556e1a0132d00f5c3334cf9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:48:13.571708 774657 certs.go:256] generating profile certs ...
I0407 12:48:13.571768 774657 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.key
I0407 12:48:13.571782 774657 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt with IP's: []
I0407 12:48:13.714856 774657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt ...
I0407 12:48:13.714889 774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: {Name:mkaeccfcdf311f349b781893ab0111a9d65c2f5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:48:13.715046 774657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.key ...
I0407 12:48:13.715056 774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.key: {Name:mk19c5cf6bbeb319c6c58793b41bf171751bcb5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:48:13.715124 774657 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.key.dbcbf223
I0407 12:48:13.715140 774657 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.crt.dbcbf223 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0407 12:48:13.804982 774657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.crt.dbcbf223 ...
I0407 12:48:13.805015 774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.crt.dbcbf223: {Name:mk19af499a320d6f1c26a50a80f1d200d7606753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:48:13.805167 774657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.key.dbcbf223 ...
I0407 12:48:13.805179 774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.key.dbcbf223: {Name:mk5087366c34e430720a75aac8d18b6e58a3291c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:48:13.805251 774657 certs.go:381] copying /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.crt.dbcbf223 -> /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.crt
I0407 12:48:13.805319 774657 certs.go:385] copying /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.key.dbcbf223 -> /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.key
I0407 12:48:13.805363 774657 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.key
I0407 12:48:13.805380 774657 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.crt with IP's: []
I0407 12:48:14.015475 774657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.crt ...
I0407 12:48:14.015504 774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.crt: {Name:mk2606e62891c4f956d85a735a11ca5c61fbfb7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:48:14.015657 774657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.key ...
I0407 12:48:14.015669 774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.key: {Name:mk764917d9e3a7c7de356c4db8485bab79055b08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:48:14.015836 774657 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca-key.pem (1675 bytes)
I0407 12:48:14.015877 774657 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca.pem (1078 bytes)
I0407 12:48:14.015908 774657 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/cert.pem (1123 bytes)
I0407 12:48:14.015931 774657 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/key.pem (1675 bytes)
I0407 12:48:14.016584 774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0407 12:48:14.039027 774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0407 12:48:14.059946 774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0407 12:48:14.081162 774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0407 12:48:14.101962 774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0407 12:48:14.122410 774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0407 12:48:14.143076 774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0407 12:48:14.163802 774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0407 12:48:14.184208 774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0407 12:48:14.204596 774657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0407 12:48:14.219629 774657 ssh_runner.go:195] Run: openssl version
I0407 12:48:14.224303 774657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0407 12:48:14.232260 774657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0407 12:48:14.235109 774657 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 7 12:48 /usr/share/ca-certificates/minikubeCA.pem
I0407 12:48:14.235164 774657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0407 12:48:14.240988 774657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
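The minikubeCA certificate is trusted system-wide by linking it into /etc/ssl/certs and adding the OpenSSL subject-hash symlink (b5213941.0 here) that TLS clients use for lookup. A Go sketch of those two steps, shelling out to openssl for the hash; illustrative only, not minikube's code:

// trust_ca.go - illustrative sketch of the CA trust steps above.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	const linked = "/etc/ssl/certs/minikubeCA.pem"

	_ = os.Remove(linked) // replace any stale link
	if err := os.Symlink(pem, linked); err != nil {
		log.Fatal(err)
	}
	// `openssl x509 -hash -noout -in <cert>` prints the subject hash (b5213941 in the log).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hashLink := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(hashLink); err != nil { // only create the hash link if missing
		if err := os.Symlink(linked, hashLink); err != nil {
			log.Fatal(err)
		}
	}
}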
I0407 12:48:14.248921 774657 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0407 12:48:14.251726 774657 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0407 12:48:14.251780 774657 kubeadm.go:392] StartCluster: {Name:addons-662808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-662808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 12:48:14.251894 774657 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0407 12:48:14.269703 774657 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0407 12:48:14.277838 774657 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0407 12:48:14.285630 774657 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0407 12:48:14.285692 774657 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0407 12:48:14.293208 774657 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0407 12:48:14.293223 774657 kubeadm.go:157] found existing configuration files:
I0407 12:48:14.293261 774657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0407 12:48:14.300884 774657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0407 12:48:14.300936 774657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0407 12:48:14.308203 774657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0407 12:48:14.315540 774657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0407 12:48:14.315590 774657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0407 12:48:14.322564 774657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0407 12:48:14.329829 774657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0407 12:48:14.329874 774657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0407 12:48:14.336995 774657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0407 12:48:14.344443 774657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0407 12:48:14.344487 774657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0407 12:48:14.351513 774657 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0407 12:48:14.387769 774657 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
I0407 12:48:14.387858 774657 kubeadm.go:310] [preflight] Running pre-flight checks
I0407 12:48:14.407746 774657 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0407 12:48:14.407849 774657 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
I0407 12:48:14.407905 774657 kubeadm.go:310] OS: Linux
I0407 12:48:14.407947 774657 kubeadm.go:310] CGROUPS_CPU: enabled
I0407 12:48:14.407989 774657 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0407 12:48:14.408029 774657 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0407 12:48:14.408071 774657 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0407 12:48:14.408128 774657 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0407 12:48:14.408177 774657 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0407 12:48:14.408215 774657 kubeadm.go:310] CGROUPS_PIDS: enabled
I0407 12:48:14.408260 774657 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0407 12:48:14.408330 774657 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0407 12:48:14.458678 774657 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0407 12:48:14.458822 774657 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0407 12:48:14.458940 774657 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0407 12:48:14.468870 774657 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0407 12:48:14.472046 774657 out.go:235] - Generating certificates and keys ...
I0407 12:48:14.472133 774657 kubeadm.go:310] [certs] Using existing ca certificate authority
I0407 12:48:14.472214 774657 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0407 12:48:14.594103 774657 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0407 12:48:14.852878 774657 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0407 12:48:14.915355 774657 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0407 12:48:15.009571 774657 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0407 12:48:15.338372 774657 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0407 12:48:15.338553 774657 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-662808 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0407 12:48:15.779584 774657 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0407 12:48:15.779738 774657 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-662808 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0407 12:48:16.148172 774657 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0407 12:48:16.221492 774657 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0407 12:48:16.326257 774657 kubeadm.go:310] [certs] Generating "sa" key and public key
I0407 12:48:16.326322 774657 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0407 12:48:16.567973 774657 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0407 12:48:16.769462 774657 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0407 12:48:16.891601 774657 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0407 12:48:17.204961 774657 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0407 12:48:17.402326 774657 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0407 12:48:17.402836 774657 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0407 12:48:17.405249 774657 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0407 12:48:17.407107 774657 out.go:235] - Booting up control plane ...
I0407 12:48:17.407216 774657 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0407 12:48:17.407324 774657 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0407 12:48:17.407994 774657 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0407 12:48:17.417431 774657 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0407 12:48:17.422712 774657 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0407 12:48:17.422758 774657 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0407 12:48:17.508817 774657 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0407 12:48:17.508966 774657 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0407 12:48:18.010356 774657 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.641646ms
I0407 12:48:18.010474 774657 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0407 12:48:22.512227 774657 kubeadm.go:310] [api-check] The API server is healthy after 4.501847067s
I0407 12:48:22.523764 774657 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0407 12:48:22.532510 774657 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0407 12:48:22.547204 774657 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0407 12:48:22.547409 774657 kubeadm.go:310] [mark-control-plane] Marking the node addons-662808 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0407 12:48:22.553911 774657 kubeadm.go:310] [bootstrap-token] Using token: l33d72.k4e0y92fadibmgkp
I0407 12:48:22.555213 774657 out.go:235] - Configuring RBAC rules ...
I0407 12:48:22.555345 774657 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0407 12:48:22.558861 774657 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0407 12:48:22.564088 774657 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0407 12:48:22.566404 774657 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0407 12:48:22.568810 774657 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0407 12:48:22.571020 774657 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0407 12:48:22.918327 774657 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0407 12:48:23.343029 774657 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0407 12:48:23.919633 774657 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0407 12:48:23.921664 774657 kubeadm.go:310]
I0407 12:48:23.921735 774657 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0407 12:48:23.921746 774657 kubeadm.go:310]
I0407 12:48:23.921838 774657 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0407 12:48:23.921849 774657 kubeadm.go:310]
I0407 12:48:23.921869 774657 kubeadm.go:310] mkdir -p $HOME/.kube
I0407 12:48:23.921931 774657 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0407 12:48:23.922002 774657 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0407 12:48:23.922011 774657 kubeadm.go:310]
I0407 12:48:23.922082 774657 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0407 12:48:23.922091 774657 kubeadm.go:310]
I0407 12:48:23.922169 774657 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0407 12:48:23.922178 774657 kubeadm.go:310]
I0407 12:48:23.922234 774657 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0407 12:48:23.922358 774657 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0407 12:48:23.922469 774657 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0407 12:48:23.922479 774657 kubeadm.go:310]
I0407 12:48:23.922586 774657 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0407 12:48:23.922690 774657 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0407 12:48:23.922698 774657 kubeadm.go:310]
I0407 12:48:23.922771 774657 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l33d72.k4e0y92fadibmgkp \
I0407 12:48:23.922876 774657 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:297cd9f04e1377b467e784b4e94a115886bd24e4049f300320a16578be94ae88 \
I0407 12:48:23.922895 774657 kubeadm.go:310] --control-plane
I0407 12:48:23.922899 774657 kubeadm.go:310]
I0407 12:48:23.923009 774657 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0407 12:48:23.923019 774657 kubeadm.go:310]
I0407 12:48:23.923122 774657 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l33d72.k4e0y92fadibmgkp \
I0407 12:48:23.923278 774657 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:297cd9f04e1377b467e784b4e94a115886bd24e4049f300320a16578be94ae88
I0407 12:48:23.925440 774657 kubeadm.go:310] [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
I0407 12:48:23.925722 774657 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
I0407 12:48:23.925871 774657 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0407 12:48:23.925897 774657 cni.go:84] Creating CNI manager for ""
I0407 12:48:23.925923 774657 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0407 12:48:23.927525 774657 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0407 12:48:23.928622 774657 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0407 12:48:23.937477 774657 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0407 12:48:23.953964 774657 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0407 12:48:23.954032 774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:48:23.954064 774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-662808 minikube.k8s.io/updated_at=2025_04_07T12_48_23_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=addons-662808 minikube.k8s.io/primary=true
I0407 12:48:23.960981 774657 ops.go:34] apiserver oom_adj: -16
I0407 12:48:24.044483 774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:48:24.545263 774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:48:25.044824 774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:48:25.545441 774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:48:26.045607 774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:48:26.544950 774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:48:27.045254 774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:48:27.544930 774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:48:28.044632 774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:48:28.545386 774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0407 12:48:28.625308 774657 kubeadm.go:1113] duration metric: took 4.671334465s to wait for elevateKubeSystemPrivileges
I0407 12:48:28.625346 774657 kubeadm.go:394] duration metric: took 14.373570989s to StartCluster
I0407 12:48:28.625377 774657 settings.go:142] acquiring lock: {Name:mke7ff97dc38733275c7b62a22ebd9966fea8bd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:48:28.625506 774657 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20598-766623/kubeconfig
I0407 12:48:28.625978 774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/kubeconfig: {Name:mkaac003ac5f75e318e3728115e1b4b0fe8249ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 12:48:28.626187 774657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0407 12:48:28.626248 774657 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0407 12:48:28.626346 774657 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0407 12:48:28.626454 774657 config.go:182] Loaded profile config "addons-662808": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:48:28.626482 774657 addons.go:69] Setting yakd=true in profile "addons-662808"
I0407 12:48:28.626483 774657 addons.go:69] Setting default-storageclass=true in profile "addons-662808"
I0407 12:48:28.626503 774657 addons.go:238] Setting addon yakd=true in "addons-662808"
I0407 12:48:28.626509 774657 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-662808"
I0407 12:48:28.626515 774657 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-662808"
I0407 12:48:28.626517 774657 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-662808"
I0407 12:48:28.626534 774657 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-662808"
I0407 12:48:28.626547 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.626554 774657 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-662808"
I0407 12:48:28.626563 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.626533 774657 addons.go:69] Setting cloud-spanner=true in profile "addons-662808"
I0407 12:48:28.626591 774657 addons.go:238] Setting addon cloud-spanner=true in "addons-662808"
I0407 12:48:28.626610 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.626662 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.626750 774657 addons.go:69] Setting ingress-dns=true in profile "addons-662808"
I0407 12:48:28.626796 774657 addons.go:238] Setting addon ingress-dns=true in "addons-662808"
I0407 12:48:28.626841 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.626954 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.627100 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.627105 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.627110 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.627123 774657 addons.go:69] Setting metrics-server=true in profile "addons-662808"
I0407 12:48:28.627136 774657 addons.go:238] Setting addon metrics-server=true in "addons-662808"
I0407 12:48:28.627171 774657 addons.go:69] Setting gcp-auth=true in profile "addons-662808"
I0407 12:48:28.627193 774657 mustload.go:65] Loading cluster: addons-662808
I0407 12:48:28.627292 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.627353 774657 config.go:182] Loaded profile config "addons-662808": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:48:28.627379 774657 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-662808"
I0407 12:48:28.627414 774657 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-662808"
I0407 12:48:28.627609 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.627114 774657 addons.go:69] Setting inspektor-gadget=true in profile "addons-662808"
I0407 12:48:28.627765 774657 addons.go:238] Setting addon inspektor-gadget=true in "addons-662808"
I0407 12:48:28.627795 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.627808 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.627928 774657 addons.go:69] Setting ingress=true in profile "addons-662808"
I0407 12:48:28.627947 774657 addons.go:238] Setting addon ingress=true in "addons-662808"
I0407 12:48:28.627988 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.628222 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.628257 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.628482 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.628647 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.628801 774657 out.go:177] * Verifying Kubernetes components...
I0407 12:48:28.630019 774657 addons.go:69] Setting registry=true in profile "addons-662808"
I0407 12:48:28.630050 774657 addons.go:238] Setting addon registry=true in "addons-662808"
I0407 12:48:28.630078 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.630310 774657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 12:48:28.630500 774657 addons.go:69] Setting storage-provisioner=true in profile "addons-662808"
I0407 12:48:28.630558 774657 addons.go:238] Setting addon storage-provisioner=true in "addons-662808"
I0407 12:48:28.630591 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.630765 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.631091 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.633826 774657 addons.go:69] Setting volumesnapshots=true in profile "addons-662808"
I0407 12:48:28.633856 774657 addons.go:238] Setting addon volumesnapshots=true in "addons-662808"
I0407 12:48:28.633877 774657 addons.go:69] Setting volcano=true in profile "addons-662808"
I0407 12:48:28.633896 774657 addons.go:238] Setting addon volcano=true in "addons-662808"
I0407 12:48:28.633897 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.633936 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.627102 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.634412 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.634420 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.626504 774657 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-662808"
I0407 12:48:28.635085 774657 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-662808"
I0407 12:48:28.635134 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.635822 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.666847 774657 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0407 12:48:28.667804 774657 addons.go:238] Setting addon default-storageclass=true in "addons-662808"
I0407 12:48:28.667915 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.668261 774657 out.go:177] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I0407 12:48:28.668464 774657 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0407 12:48:28.668486 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0407 12:48:28.668545 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:28.668856 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.670044 774657 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I0407 12:48:28.670067 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I0407 12:48:28.670140 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:28.672008 774657 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0407 12:48:28.672576 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.673159 774657 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
I0407 12:48:28.673180 774657 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0407 12:48:28.673248 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:28.676074 774657 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
I0407 12:48:28.677203 774657 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
I0407 12:48:28.677228 774657 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
I0407 12:48:28.677309 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:28.699203 774657 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.30
I0407 12:48:28.699204 774657 out.go:177] - Using image docker.io/registry:2.8.3
I0407 12:48:28.700557 774657 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
I0407 12:48:28.700642 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0407 12:48:28.700777 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:28.702223 774657 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
I0407 12:48:28.703805 774657 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
I0407 12:48:28.703832 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0407 12:48:28.703910 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:28.706850 774657 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0407 12:48:28.708357 774657 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0407 12:48:28.708381 774657 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0407 12:48:28.708491 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:28.718559 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:28.729455 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:28.730403 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:28.739204 774657 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0407 12:48:28.741231 774657 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.11.0
I0407 12:48:28.746027 774657 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0407 12:48:28.746284 774657 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0407 12:48:28.747197 774657 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0407 12:48:28.747230 774657 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0407 12:48:28.747306 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:28.747587 774657 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.11.0
I0407 12:48:28.748719 774657 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0407 12:48:28.748838 774657 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.11.0
I0407 12:48:28.751584 774657 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0407 12:48:28.751612 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (480278 bytes)
I0407 12:48:28.751675 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:28.752989 774657 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0407 12:48:28.753980 774657 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
I0407 12:48:28.754101 774657 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0407 12:48:28.755112 774657 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0407 12:48:28.755373 774657 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0407 12:48:28.755408 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0407 12:48:28.755505 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:28.757281 774657 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0407 12:48:28.757373 774657 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0407 12:48:28.758278 774657 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0407 12:48:28.759165 774657 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0407 12:48:28.759205 774657 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0407 12:48:28.759308 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:28.761710 774657 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0407 12:48:28.762185 774657 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0407 12:48:28.763101 774657 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0407 12:48:28.763121 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0407 12:48:28.763172 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:28.763407 774657 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0407 12:48:28.763418 774657 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0407 12:48:28.763486 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:28.764899 774657 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
I0407 12:48:28.766758 774657 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0407 12:48:28.766780 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0407 12:48:28.766835 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:28.767531 774657 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-662808"
I0407 12:48:28.767577 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:28.768073 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:28.771200 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:28.777529 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:28.788413 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:28.788578 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:28.794916 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:28.799092 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:28.800676 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:28.801432 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:28.807353 774657 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0407 12:48:28.809389 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:28.810360 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:28.810527 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:28.811501 774657 out.go:177] - Using image docker.io/busybox:stable
I0407 12:48:28.812549 774657 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0407 12:48:28.812564 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0407 12:48:28.812606 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
W0407 12:48:28.825320 774657 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0407 12:48:28.825359 774657 retry.go:31] will retry after 246.271386ms: ssh: handshake failed: EOF
W0407 12:48:28.825492 774657 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0407 12:48:28.825501 774657 retry.go:31] will retry after 182.340587ms: ssh: handshake failed: EOF
W0407 12:48:28.825571 774657 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0407 12:48:28.825578 774657 retry.go:31] will retry after 141.596119ms: ssh: handshake failed: EOF
I0407 12:48:28.848810 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:28.939301 774657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0407 12:48:28.939484 774657 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0407 12:48:28.951701 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0407 12:48:29.133877 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0407 12:48:29.223182 774657 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0407 12:48:29.223202 774657 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
I0407 12:48:29.223215 774657 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0407 12:48:29.223220 774657 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0407 12:48:29.235955 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I0407 12:48:29.321669 774657 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
I0407 12:48:29.321701 774657 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0407 12:48:29.331537 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0407 12:48:29.340720 774657 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0407 12:48:29.340820 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0407 12:48:29.425616 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0407 12:48:29.438738 774657 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
I0407 12:48:29.438934 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
I0407 12:48:29.524634 774657 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0407 12:48:29.524735 774657 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0407 12:48:29.527528 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0407 12:48:29.620132 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0407 12:48:29.621559 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0407 12:48:29.635787 774657 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
I0407 12:48:29.635872 774657 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0407 12:48:29.638999 774657 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0407 12:48:29.639076 774657 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0407 12:48:29.821788 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0407 12:48:29.823746 774657 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
I0407 12:48:29.823819 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0407 12:48:29.934385 774657 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0407 12:48:29.934473 774657 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0407 12:48:29.941472 774657 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
I0407 12:48:29.941553 774657 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0407 12:48:30.022260 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I0407 12:48:30.022922 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0407 12:48:30.035541 774657 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0407 12:48:30.035661 774657 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0407 12:48:30.324433 774657 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0407 12:48:30.324472 774657 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0407 12:48:30.330612 774657 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
I0407 12:48:30.330701 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0407 12:48:30.336971 774657 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0407 12:48:30.337064 774657 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0407 12:48:30.624907 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0407 12:48:30.635244 774657 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0407 12:48:30.635281 774657 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0407 12:48:30.831223 774657 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0407 12:48:30.831309 774657 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0407 12:48:30.920898 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0407 12:48:31.141606 774657 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.202203959s)
I0407 12:48:31.141787 774657 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0407 12:48:31.141730 774657 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.20219511s)
I0407 12:48:31.142782 774657 node_ready.go:35] waiting up to 6m0s for node "addons-662808" to be "Ready" ...
I0407 12:48:31.222708 774657 node_ready.go:49] node "addons-662808" has status "Ready":"True"
I0407 12:48:31.222798 774657 node_ready.go:38] duration metric: took 79.940802ms for node "addons-662808" to be "Ready" ...
I0407 12:48:31.222822 774657 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0407 12:48:31.230444 774657 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-l66rh" in "kube-system" namespace to be "Ready" ...
I0407 12:48:31.436557 774657 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0407 12:48:31.436585 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0407 12:48:31.631451 774657 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0407 12:48:31.631494 774657 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0407 12:48:31.723584 774657 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-662808" context rescaled to 1 replicas
I0407 12:48:32.338068 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0407 12:48:32.421899 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.470098124s)
I0407 12:48:32.435674 774657 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0407 12:48:32.435786 774657 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0407 12:48:32.942884 774657 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0407 12:48:32.942914 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0407 12:48:33.038500 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.904567167s)
I0407 12:48:33.038590 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.802545106s)
I0407 12:48:33.237599 774657 pod_ready.go:103] pod "amd-gpu-device-plugin-l66rh" in "kube-system" namespace has status "Ready":"False"
I0407 12:48:33.425428 774657 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0407 12:48:33.425461 774657 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0407 12:48:34.032752 774657 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0407 12:48:34.032783 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0407 12:48:34.225272 774657 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0407 12:48:34.225365 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0407 12:48:34.730262 774657 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0407 12:48:34.730354 774657 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0407 12:48:35.121048 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0407 12:48:35.430257 774657 pod_ready.go:103] pod "amd-gpu-device-plugin-l66rh" in "kube-system" namespace has status "Ready":"False"
I0407 12:48:35.531930 774657 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0407 12:48:35.532084 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:35.654178 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:36.544079 774657 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0407 12:48:36.932625 774657 addons.go:238] Setting addon gcp-auth=true in "addons-662808"
I0407 12:48:36.932693 774657 host.go:66] Checking if "addons-662808" exists ...
I0407 12:48:36.933244 774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
I0407 12:48:36.955903 774657 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0407 12:48:36.955956 774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
I0407 12:48:36.972680 774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
I0407 12:48:37.737260 774657 pod_ready.go:103] pod "amd-gpu-device-plugin-l66rh" in "kube-system" namespace has status "Ready":"False"
I0407 12:48:40.241200 774657 pod_ready.go:103] pod "amd-gpu-device-plugin-l66rh" in "kube-system" namespace has status "Ready":"False"
I0407 12:48:41.737600 774657 pod_ready.go:93] pod "amd-gpu-device-plugin-l66rh" in "kube-system" namespace has status "Ready":"True"
I0407 12:48:41.737631 774657 pod_ready.go:82] duration metric: took 10.507083363s for pod "amd-gpu-device-plugin-l66rh" in "kube-system" namespace to be "Ready" ...
I0407 12:48:41.737651 774657 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace to be "Ready" ...
I0407 12:48:41.741782 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.410207428s)
I0407 12:48:41.741860 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (12.316161256s)
I0407 12:48:41.742148 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (12.214548707s)
I0407 12:48:41.742357 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (12.122116069s)
I0407 12:48:41.742499 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.120873847s)
I0407 12:48:41.742515 774657 addons.go:479] Verifying addon ingress=true in "addons-662808"
I0407 12:48:41.742905 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.921016081s)
I0407 12:48:41.742971 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.719977403s)
I0407 12:48:41.742990 774657 addons.go:479] Verifying addon registry=true in "addons-662808"
I0407 12:48:41.743051 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (11.720690423s)
I0407 12:48:41.743235 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.822238012s)
I0407 12:48:41.743315 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.118369842s)
I0407 12:48:41.743935 774657 addons.go:479] Verifying addon metrics-server=true in "addons-662808"
I0407 12:48:41.743352 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.405175195s)
W0407 12:48:41.743986 774657 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0407 12:48:41.744005 774657 retry.go:31] will retry after 343.042144ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0407 12:48:41.744950 774657 out.go:177] * Verifying registry addon...
I0407 12:48:41.744987 774657 out.go:177] * Verifying ingress addon...
I0407 12:48:41.745784 774657 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-662808 service yakd-dashboard -n yakd-dashboard
I0407 12:48:41.748794 774657 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0407 12:48:41.749969 774657 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
W0407 12:48:41.841743 774657 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I0407 12:48:41.843394 774657 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0407 12:48:41.843422 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:41.843593 774657 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0407 12:48:41.843608 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:42.087614 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0407 12:48:42.322570 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:42.423002 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:42.824705 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:42.825041 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:42.829958 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.708779578s)
I0407 12:48:42.830035 774657 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-662808"
I0407 12:48:42.830249 774657 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.874312621s)
I0407 12:48:42.831387 774657 out.go:177] * Verifying csi-hostpath-driver addon...
I0407 12:48:42.831475 774657 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I0407 12:48:42.833086 774657 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0407 12:48:42.833426 774657 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0407 12:48:42.834745 774657 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0407 12:48:42.834771 774657 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0407 12:48:42.849546 774657 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0407 12:48:42.849571 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:42.937424 774657 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0407 12:48:42.937454 774657 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0407 12:48:43.028457 774657 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0407 12:48:43.028564 774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0407 12:48:43.128873 774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0407 12:48:43.323172 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:43.323219 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:43.338255 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:43.744647 774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
I0407 12:48:43.821756 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:43.821950 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:43.837149 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:44.252968 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:44.253317 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:44.338169 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:44.652098 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.56442326s)
I0407 12:48:44.652193 774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.523276069s)
I0407 12:48:44.653201 774657 addons.go:479] Verifying addon gcp-auth=true in "addons-662808"
I0407 12:48:44.655840 774657 out.go:177] * Verifying gcp-auth addon...
I0407 12:48:44.657640 774657 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0407 12:48:44.722370 774657 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0407 12:48:44.823171 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:44.823203 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:44.836648 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:45.252258 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:45.252865 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:45.337725 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:45.752010 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:45.754343 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:45.837903 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:46.243554 774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
I0407 12:48:46.252201 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:46.252843 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:46.337346 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:46.752367 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:46.752456 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:46.837133 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:47.252201 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:47.252550 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:47.336956 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:47.752087 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:47.752828 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:47.837318 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:48.251990 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:48.252674 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:48.337025 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:48.742794 774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
I0407 12:48:48.751937 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:48.752091 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:48.837032 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:49.252377 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:49.252415 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:49.337650 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:49.752104 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:49.752842 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:49.837533 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:50.252574 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:50.252581 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:50.336525 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:50.743523 774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
I0407 12:48:50.751894 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:50.752620 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:50.837065 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:51.251589 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:51.253601 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:51.337171 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:51.751518 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:51.753095 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:51.837877 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:52.252138 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:52.252638 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:52.337456 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:52.751721 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:52.752293 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:52.837574 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:53.243059 774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
I0407 12:48:53.251848 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:53.252263 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:53.337808 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:53.751635 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:53.752523 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:53.838057 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:54.251756 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:54.252514 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:54.340507 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:54.751776 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:54.752769 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:54.836979 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:55.243393 774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
I0407 12:48:55.251881 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:55.252778 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:55.336984 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:55.751888 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:55.752617 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:55.836592 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:56.251458 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:56.253073 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:56.337388 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:56.751843 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:56.752595 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:56.836930 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:57.243689 774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
I0407 12:48:57.252411 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:57.252702 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:57.336702 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:57.806618 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:57.806767 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:57.837104 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:58.251786 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:58.252571 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:58.351561 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:58.752094 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:58.752884 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:58.837670 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:59.252538 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:59.252946 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:59.336736 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:48:59.743288 774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
I0407 12:48:59.761515 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:48:59.761544 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:48:59.862687 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:00.251682 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:00.252650 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:49:00.337140 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:00.751253 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:00.752194 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:49:00.837536 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:01.251590 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:01.252436 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:49:01.338006 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:01.743579 774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
I0407 12:49:01.752304 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:01.752946 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:49:01.837182 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:02.262809 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:02.262907 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:49:02.363250 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:02.752094 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:02.752608 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:49:02.836980 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:03.251641 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:03.252553 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:49:03.336739 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:03.743802 774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
I0407 12:49:03.751341 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:03.752182 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:49:03.837256 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:04.251557 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:04.253087 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:49:04.337219 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:04.751990 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:04.752908 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0407 12:49:04.837150 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:05.285834 774657 kapi.go:107] duration metric: took 23.53585631s to wait for kubernetes.io/minikube-addons=registry ...
I0407 12:49:05.286004 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:05.363212 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:05.744239 774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
I0407 12:49:05.751805 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:05.836744 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:06.252063 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:06.337490 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:06.751870 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:06.837016 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:07.251715 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:07.337460 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:07.745603 774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
I0407 12:49:07.751972 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:07.837402 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:08.251800 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:08.337105 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:08.743698 774657 pod_ready.go:93] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"True"
I0407 12:49:08.743729 774657 pod_ready.go:82] duration metric: took 27.006069221s for pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace to be "Ready" ...
I0407 12:49:08.743744 774657 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-75t2w" in "kube-system" namespace to be "Ready" ...
I0407 12:49:08.748097 774657 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-75t2w" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-75t2w" not found
I0407 12:49:08.748130 774657 pod_ready.go:82] duration metric: took 4.377554ms for pod "coredns-668d6bf9bc-75t2w" in "kube-system" namespace to be "Ready" ...
E0407 12:49:08.748145 774657 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-75t2w" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-75t2w" not found
I0407 12:49:08.748155 774657 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-662808" in "kube-system" namespace to be "Ready" ...
I0407 12:49:08.751285 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:08.752354 774657 pod_ready.go:93] pod "etcd-addons-662808" in "kube-system" namespace has status "Ready":"True"
I0407 12:49:08.752372 774657 pod_ready.go:82] duration metric: took 4.205493ms for pod "etcd-addons-662808" in "kube-system" namespace to be "Ready" ...
I0407 12:49:08.752384 774657 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-662808" in "kube-system" namespace to be "Ready" ...
I0407 12:49:08.756482 774657 pod_ready.go:93] pod "kube-apiserver-addons-662808" in "kube-system" namespace has status "Ready":"True"
I0407 12:49:08.756503 774657 pod_ready.go:82] duration metric: took 4.111433ms for pod "kube-apiserver-addons-662808" in "kube-system" namespace to be "Ready" ...
I0407 12:49:08.756516 774657 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-662808" in "kube-system" namespace to be "Ready" ...
I0407 12:49:08.760374 774657 pod_ready.go:93] pod "kube-controller-manager-addons-662808" in "kube-system" namespace has status "Ready":"True"
I0407 12:49:08.760391 774657 pod_ready.go:82] duration metric: took 3.867645ms for pod "kube-controller-manager-addons-662808" in "kube-system" namespace to be "Ready" ...
I0407 12:49:08.760401 774657 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cgdfz" in "kube-system" namespace to be "Ready" ...
I0407 12:49:08.837859 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:08.941453 774657 pod_ready.go:93] pod "kube-proxy-cgdfz" in "kube-system" namespace has status "Ready":"True"
I0407 12:49:08.941482 774657 pod_ready.go:82] duration metric: took 181.073388ms for pod "kube-proxy-cgdfz" in "kube-system" namespace to be "Ready" ...
I0407 12:49:08.941495 774657 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-662808" in "kube-system" namespace to be "Ready" ...
I0407 12:49:09.252581 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:09.337435 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:09.340616 774657 pod_ready.go:93] pod "kube-scheduler-addons-662808" in "kube-system" namespace has status "Ready":"True"
I0407 12:49:09.340645 774657 pod_ready.go:82] duration metric: took 399.138574ms for pod "kube-scheduler-addons-662808" in "kube-system" namespace to be "Ready" ...
I0407 12:49:09.340657 774657 pod_ready.go:39] duration metric: took 38.117808805s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0407 12:49:09.340687 774657 api_server.go:52] waiting for apiserver process to appear ...
I0407 12:49:09.340749 774657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:49:09.359683 774657 api_server.go:72] duration metric: took 40.733392159s to wait for apiserver process to appear ...
I0407 12:49:09.359712 774657 api_server.go:88] waiting for apiserver healthz status ...
I0407 12:49:09.359736 774657 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0407 12:49:09.363612 774657 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0407 12:49:09.364678 774657 api_server.go:141] control plane version: v1.32.2
I0407 12:49:09.364705 774657 api_server.go:131] duration metric: took 4.98495ms to wait for apiserver health ...
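[editor's note] The healthz check logged just above can be reproduced by hand against the same endpoint; a minimal sketch, assuming the cluster keeps Kubernetes' default system:public-info-viewer binding (which allows unauthenticated reads of /healthz) and that -k is used to skip the self-signed API server certificate:

    # query the API server health endpoint seen in the log above
    curl -k https://192.168.49.2:8443/healthz
    # expected output on a healthy control plane:
    ok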
I0407 12:49:09.364716 774657 system_pods.go:43] waiting for kube-system pods to appear ...
I0407 12:49:09.543246 774657 system_pods.go:59] 18 kube-system pods found
I0407 12:49:09.543301 774657 system_pods.go:61] "amd-gpu-device-plugin-l66rh" [a5fb26c5-73e6-4735-a212-e1b9c91e7d5c] Running
I0407 12:49:09.543312 774657 system_pods.go:61] "coredns-668d6bf9bc-2kx5j" [7871d918-36bc-48ba-988e-2f65e075c4b5] Running
I0407 12:49:09.543325 774657 system_pods.go:61] "csi-hostpath-attacher-0" [a9c263af-a860-4403-a74d-39a5679c372e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0407 12:49:09.543337 774657 system_pods.go:61] "csi-hostpath-resizer-0" [63ba8f75-61ff-4844-aef3-769cc7389f24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0407 12:49:09.543349 774657 system_pods.go:61] "csi-hostpathplugin-5w4kl" [4c65ef1f-5f22-4b45-be30-ece32afb0e3a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0407 12:49:09.543357 774657 system_pods.go:61] "etcd-addons-662808" [37a14464-f055-4cd0-aab2-2e8c975e9f8e] Running
I0407 12:49:09.543361 774657 system_pods.go:61] "kube-apiserver-addons-662808" [53b37cc3-f3de-4773-bf32-e79021718953] Running
I0407 12:49:09.543365 774657 system_pods.go:61] "kube-controller-manager-addons-662808" [60ca4f96-9f7a-4e87-8c59-98b0face6e5e] Running
I0407 12:49:09.543373 774657 system_pods.go:61] "kube-ingress-dns-minikube" [cf9995a2-94ee-45b7-9333-083c10ffac79] Running
I0407 12:49:09.543376 774657 system_pods.go:61] "kube-proxy-cgdfz" [3e69be31-74a4-41e6-bde2-4615805d9512] Running
I0407 12:49:09.543379 774657 system_pods.go:61] "kube-scheduler-addons-662808" [393769aa-3d0f-46dd-870e-932a02635cae] Running
I0407 12:49:09.543383 774657 system_pods.go:61] "metrics-server-7fbb699795-5bqmp" [d86b42f2-cba5-4d53-8277-99e8dc49f20f] Running
I0407 12:49:09.543387 774657 system_pods.go:61] "nvidia-device-plugin-daemonset-rv6cl" [37f5078d-e3a7-43d5-a718-db741b45b741] Running
I0407 12:49:09.543390 774657 system_pods.go:61] "registry-6c88467877-g6r5h" [b30d0273-c82f-46a6-a761-fd905b1d3783] Running
I0407 12:49:09.543395 774657 system_pods.go:61] "registry-proxy-vqvmc" [9829d87d-bb8e-4c3d-b885-03deb72b4409] Running
I0407 12:49:09.543405 774657 system_pods.go:61] "snapshot-controller-68b874b76f-cv7cx" [8979a27d-2c7b-45cb-a9ef-57da407ff64f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0407 12:49:09.543416 774657 system_pods.go:61] "snapshot-controller-68b874b76f-g9ln7" [194e86dc-ac8e-4292-a7e9-e9f3ee215c9f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0407 12:49:09.543446 774657 system_pods.go:61] "storage-provisioner" [28fcbb16-5c69-498d-9882-b4a8c2ed606f] Running
I0407 12:49:09.543460 774657 system_pods.go:74] duration metric: took 178.736638ms to wait for pod list to return data ...
I0407 12:49:09.543473 774657 default_sa.go:34] waiting for default service account to be created ...
I0407 12:49:09.741969 774657 default_sa.go:45] found service account: "default"
I0407 12:49:09.742000 774657 default_sa.go:55] duration metric: took 198.519241ms for default service account to be created ...
I0407 12:49:09.742015 774657 system_pods.go:116] waiting for k8s-apps to be running ...
I0407 12:49:09.751754 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:09.837102 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:09.943070 774657 system_pods.go:86] 18 kube-system pods found
I0407 12:49:09.943110 774657 system_pods.go:89] "amd-gpu-device-plugin-l66rh" [a5fb26c5-73e6-4735-a212-e1b9c91e7d5c] Running
I0407 12:49:09.943120 774657 system_pods.go:89] "coredns-668d6bf9bc-2kx5j" [7871d918-36bc-48ba-988e-2f65e075c4b5] Running
I0407 12:49:09.943131 774657 system_pods.go:89] "csi-hostpath-attacher-0" [a9c263af-a860-4403-a74d-39a5679c372e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0407 12:49:09.943139 774657 system_pods.go:89] "csi-hostpath-resizer-0" [63ba8f75-61ff-4844-aef3-769cc7389f24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0407 12:49:09.943154 774657 system_pods.go:89] "csi-hostpathplugin-5w4kl" [4c65ef1f-5f22-4b45-be30-ece32afb0e3a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0407 12:49:09.943162 774657 system_pods.go:89] "etcd-addons-662808" [37a14464-f055-4cd0-aab2-2e8c975e9f8e] Running
I0407 12:49:09.943168 774657 system_pods.go:89] "kube-apiserver-addons-662808" [53b37cc3-f3de-4773-bf32-e79021718953] Running
I0407 12:49:09.943176 774657 system_pods.go:89] "kube-controller-manager-addons-662808" [60ca4f96-9f7a-4e87-8c59-98b0face6e5e] Running
I0407 12:49:09.943183 774657 system_pods.go:89] "kube-ingress-dns-minikube" [cf9995a2-94ee-45b7-9333-083c10ffac79] Running
I0407 12:49:09.943190 774657 system_pods.go:89] "kube-proxy-cgdfz" [3e69be31-74a4-41e6-bde2-4615805d9512] Running
I0407 12:49:09.943195 774657 system_pods.go:89] "kube-scheduler-addons-662808" [393769aa-3d0f-46dd-870e-932a02635cae] Running
I0407 12:49:09.943203 774657 system_pods.go:89] "metrics-server-7fbb699795-5bqmp" [d86b42f2-cba5-4d53-8277-99e8dc49f20f] Running
I0407 12:49:09.943208 774657 system_pods.go:89] "nvidia-device-plugin-daemonset-rv6cl" [37f5078d-e3a7-43d5-a718-db741b45b741] Running
I0407 12:49:09.943216 774657 system_pods.go:89] "registry-6c88467877-g6r5h" [b30d0273-c82f-46a6-a761-fd905b1d3783] Running
I0407 12:49:09.943221 774657 system_pods.go:89] "registry-proxy-vqvmc" [9829d87d-bb8e-4c3d-b885-03deb72b4409] Running
I0407 12:49:09.943228 774657 system_pods.go:89] "snapshot-controller-68b874b76f-cv7cx" [8979a27d-2c7b-45cb-a9ef-57da407ff64f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0407 12:49:09.943238 774657 system_pods.go:89] "snapshot-controller-68b874b76f-g9ln7" [194e86dc-ac8e-4292-a7e9-e9f3ee215c9f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0407 12:49:09.943245 774657 system_pods.go:89] "storage-provisioner" [28fcbb16-5c69-498d-9882-b4a8c2ed606f] Running
I0407 12:49:09.943265 774657 system_pods.go:126] duration metric: took 201.241394ms to wait for k8s-apps to be running ...
I0407 12:49:09.943278 774657 system_svc.go:44] waiting for kubelet service to be running ....
I0407 12:49:09.943332 774657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0407 12:49:09.957589 774657 system_svc.go:56] duration metric: took 14.301888ms WaitForService to wait for kubelet
I0407 12:49:09.957626 774657 kubeadm.go:582] duration metric: took 41.331341917s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0407 12:49:09.957649 774657 node_conditions.go:102] verifying NodePressure condition ...
I0407 12:49:10.142195 774657 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0407 12:49:10.142232 774657 node_conditions.go:123] node cpu capacity is 8
I0407 12:49:10.142257 774657 node_conditions.go:105] duration metric: took 184.601963ms to run NodePressure ...
I0407 12:49:10.142274 774657 start.go:241] waiting for startup goroutines ...
I0407 12:49:10.251805 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:10.337317 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:10.836426 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:10.838319 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:11.252328 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:11.337331 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:11.752587 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:11.837560 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:12.252521 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:12.337769 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:12.751847 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:12.836848 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:13.252001 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:13.337448 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:13.762614 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:13.863313 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:14.262294 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:14.337618 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:14.752649 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:14.837854 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:15.321423 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:15.422912 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:15.752197 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:15.837658 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:16.252874 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:16.336998 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:16.751880 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:16.837277 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:17.252436 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:17.337473 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:17.751589 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:17.837696 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:18.252018 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:18.337152 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:18.752450 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:18.837959 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:19.252528 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:19.337699 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:19.752271 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:19.837282 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:20.252511 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:20.338117 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:20.751876 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:20.837221 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:21.252417 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:21.337571 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:21.751914 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:21.848088 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:22.252905 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:22.337025 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:22.761893 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:22.862451 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:23.253121 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:23.337397 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:23.752481 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:23.837743 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:24.262361 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:24.337610 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:24.762452 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:24.863557 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:25.252282 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:25.337916 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:25.761505 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:25.861828 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:26.253805 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:26.354191 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:26.752652 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:26.837610 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:27.252601 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:27.338102 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:27.762166 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:27.837210 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:28.252238 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:28.337628 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0407 12:49:28.762917 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:28.836794 774657 kapi.go:107] duration metric: took 46.003363526s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0407 12:49:29.252517 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:29.752604 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:30.252015 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:30.751676 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:31.252261 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:31.762567 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:32.252369 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:32.752333 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:33.252523 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:33.752283 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:34.251909 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:34.752333 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:35.252570 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:35.762247 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:36.252317 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:36.751967 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:37.252405 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:37.752636 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:38.252234 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:38.752194 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:39.252652 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:39.752192 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:40.251849 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:40.752268 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:41.252433 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:41.821432 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:42.251942 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:42.751723 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:43.261945 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:43.751837 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:44.252652 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:44.752745 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:45.252022 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:45.752724 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:46.252901 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:46.752124 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:47.252473 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:47.789485 774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0407 12:49:48.251650 774657 kapi.go:107] duration metric: took 1m6.502850494s to wait for app.kubernetes.io/name=ingress-nginx ...
I0407 12:50:06.661919 774657 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0407 12:50:06.661948 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:07.161442 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:07.660889 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:08.161582 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:08.661411 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:09.160459 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:09.660399 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:10.160933 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:10.662236 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:11.160384 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:11.660884 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:12.161350 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:12.660769 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:13.160837 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:13.661048 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:14.161562 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:14.661202 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:15.160256 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:15.661200 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:16.160767 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:16.661428 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:17.160702 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:17.661145 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:18.161059 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:18.661353 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:19.161189 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:19.661236 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:20.160864 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:20.661743 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:21.160985 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:21.661870 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:22.161555 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:22.661251 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:23.160464 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:23.660820 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:24.161229 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:24.660379 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:25.160668 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:25.661174 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:26.160789 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:26.661587 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:27.161366 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:27.661492 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:28.161600 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:28.660961 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:29.161324 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:29.660635 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:30.161399 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:30.661563 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:31.160391 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:31.660840 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:32.161195 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:32.660944 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:33.160530 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:33.660530 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:34.161067 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:34.661758 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:35.160941 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:35.661576 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:36.160800 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:36.661497 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:37.160508 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:37.660606 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:38.161132 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:38.660633 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:39.160138 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:39.660396 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:40.160972 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:40.662981 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:41.161306 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:41.660435 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:42.160915 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:42.661901 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:43.161107 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:43.660353 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:44.160754 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:44.661315 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:45.160310 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:45.661862 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:46.161299 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:46.661148 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:47.160377 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:47.662488 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:48.161194 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:48.661732 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:49.160966 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:49.661261 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:50.161566 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:50.661288 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:51.160794 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:51.661542 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:52.161170 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:52.660886 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:53.161257 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:53.660554 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:54.161052 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:54.661530 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:55.161050 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:55.661779 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:56.161675 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:56.661349 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:57.160996 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:57.661219 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:58.160888 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:58.661028 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:59.161373 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:50:59.660512 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:00.161046 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:00.662237 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:01.160526 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:01.660686 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:02.161020 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:02.661919 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:03.161309 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:03.660462 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:04.161033 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:04.661662 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:05.160847 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:05.661936 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:06.161570 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:06.660921 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:07.161407 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:07.660641 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:08.161305 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:08.660457 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:09.160530 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:09.660427 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:10.160690 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:10.661763 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:11.161254 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:11.661747 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:12.161895 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:12.661117 774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0407 12:51:13.176866 774657 kapi.go:107] duration metric: took 2m28.519220815s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0407 12:51:13.178364 774657 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-662808 cluster.
I0407 12:51:13.179475 774657 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0407 12:51:13.180587 774657 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0407 12:51:13.181736 774657 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, amd-gpu-device-plugin, volcano, cloud-spanner, nvidia-device-plugin, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0407 12:51:13.182763 774657 addons.go:514] duration metric: took 2m44.556414011s for enable addons: enabled=[ingress-dns storage-provisioner amd-gpu-device-plugin volcano cloud-spanner nvidia-device-plugin inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0407 12:51:13.182817 774657 start.go:246] waiting for cluster config update ...
I0407 12:51:13.182841 774657 start.go:255] writing updated cluster config ...
I0407 12:51:13.183549 774657 ssh_runner.go:195] Run: rm -f paused
I0407 12:51:13.236912 774657 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
I0407 12:51:13.238540 774657 out.go:177] * Done! kubectl is now configured to use "addons-662808" cluster and "default" namespace by default
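The `gcp-auth-skip-secret` label mentioned in the gcp-auth messages above is set in the pod manifest itself. A minimal sketch of a pod that opts out of credential injection, assuming the addon's webhook skips any pod carrying this label (the label key comes from the message above; the value "true", the pod name, and the image are illustrative):

kubectl --context addons-662808 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-auth-demo            # illustrative name
  labels:
    gcp-auth-skip-secret: "true"    # key taken from the gcp-auth output; value assumed
spec:
  containers:
  - name: demo
    image: busybox:stable
    command: ["sh", "-c", "sleep 3600"]
EOF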
==> Docker <==
Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.332822653Z" level=info msg="ignoring event" container=86370591ab32ac046883a9c1ed4c71092f44f387ff0ba3c031030ebca25cd94f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.345587296Z" level=info msg="ignoring event" container=cd80a0757ba85ba7b309a52568ae874f5a28ed00add028bcefe449245af54ef3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.345640461Z" level=info msg="ignoring event" container=662cfd47ac3fb6ed3edd5ce9800afc9c08ea202df6bd19abe6a80d88eb0d079d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.350910260Z" level=info msg="ignoring event" container=76f9d40c6f8e8f7786d9b2f1218c79aeb61455d0f4d2bbe462367b4f3eef5b31 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.425858525Z" level=info msg="ignoring event" container=998ede9839c047880f9d8cafba5aca9ff72ba878c02e5748dc30485d2dc35de9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.629788278Z" level=info msg="ignoring event" container=cc9ed7e51f74421296d40390cc7a6667df88f250c56d5d4c10d03a76a65cf670 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.653363700Z" level=info msg="ignoring event" container=e20a677a788537b1fb642a36089e395042045741eb6d130d1809dc26744cb8e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 07 12:52:46 addons-662808 cri-dockerd[1743]: time="2025-04-07T12:52:46Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"csi-hostpath-resizer-0_kube-system\": unexpected command output nsenter: cannot open /proc/6029/ns/net: No such file or directory\n with error: exit status 1"
Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.731600659Z" level=info msg="ignoring event" container=00d88fae1d6a7f2b10882def30670ac5391f57c77edd28865bd4ed74ce4ecff9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 07 12:52:50 addons-662808 dockerd[1440]: time="2025-04-07T12:52:50.351915768Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:52:50 addons-662808 dockerd[1440]: time="2025-04-07T12:52:50.354061040Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:52:50 addons-662808 dockerd[1440]: time="2025-04-07T12:52:50.482128979Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:52:50 addons-662808 dockerd[1440]: time="2025-04-07T12:52:50.484284706Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:53:17 addons-662808 dockerd[1440]: time="2025-04-07T12:53:17.358469673Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:53:17 addons-662808 dockerd[1440]: time="2025-04-07T12:53:17.360076627Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:53:20 addons-662808 dockerd[1440]: time="2025-04-07T12:53:20.408326435Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:53:20 addons-662808 cri-dockerd[1743]: time="2025-04-07T12:53:20Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: 1.0: Pulling from kicbase/echo-server"
Apr 07 12:53:25 addons-662808 cri-dockerd[1743]: time="2025-04-07T12:53:25Z" level=error msg="error getting RW layer size for container ID 'c282c016006910b62bda0d3a5c34b1ad120a2cd6dfd5198ad8ea1f1a3ac5f8a8': Error response from daemon: No such container: c282c016006910b62bda0d3a5c34b1ad120a2cd6dfd5198ad8ea1f1a3ac5f8a8"
Apr 07 12:53:25 addons-662808 cri-dockerd[1743]: time="2025-04-07T12:53:25Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c282c016006910b62bda0d3a5c34b1ad120a2cd6dfd5198ad8ea1f1a3ac5f8a8'"
Apr 07 12:53:59 addons-662808 dockerd[1440]: time="2025-04-07T12:53:59.349578742Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:53:59 addons-662808 dockerd[1440]: time="2025-04-07T12:53:59.351294002Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:54:12 addons-662808 dockerd[1440]: time="2025-04-07T12:54:12.363934612Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:54:12 addons-662808 dockerd[1440]: time="2025-04-07T12:54:12.365801935Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:55:28 addons-662808 dockerd[1440]: time="2025-04-07T12:55:28.452081720Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Apr 07 12:55:28 addons-662808 cri-dockerd[1743]: time="2025-04-07T12:55:28Z" level=info msg="Stop pulling image busybox:stable: stable: Pulling from library/busybox"
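Every pull failure in the Docker log above is Docker Hub's unauthenticated rate limit, not a networking or image problem. Two commonly used workarounds, sketched under assumptions: the profile name addons-662808 comes from this run, and pre-loading only helps if the images are already present in the host's local image cache.

# Pre-load the failing images into the cluster node so kubelet never has to pull them.
minikube -p addons-662808 image load busybox:stable
minikube -p addons-662808 image load docker.io/kicbase/echo-server:1.0

# Alternatively, authenticate the host's Docker client so pulls count against an account quota.
docker login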
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
91847463ff11d nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591 3 minutes ago Running nginx 0 53e4d3f317689 nginx
432e12c3d133a gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 aa8a092293397 busybox
ce4342824436c rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246 6 minutes ago Running local-path-provisioner 0 809daafacf09a local-path-provisioner-76f89f99b5-zz5vr
8453eafd0d8b9 6e38f40d628db 7 minutes ago Running storage-provisioner 0 1514be81e6c23 storage-provisioner
896abc0e03ea1 c69fa2e9cbf5f 7 minutes ago Running coredns 0 cb2349542b905 coredns-668d6bf9bc-2kx5j
6dec7c27faa4d f1332858868e1 7 minutes ago Running kube-proxy 0 74eaa8b89e907 kube-proxy-cgdfz
920768df661bd a9e7e6b294baf 7 minutes ago Running etcd 0 a37ca73db4ae0 etcd-addons-662808
853e5bbacf4c9 85b7a174738ba 7 minutes ago Running kube-apiserver 0 1923f5b1ef964 kube-apiserver-addons-662808
18ae4fce94233 d8e673e7c9983 7 minutes ago Running kube-scheduler 0 3e55a61a516f6 kube-scheduler-addons-662808
b0deb3bc6ed6e b6a454c5a800d 7 minutes ago Running kube-controller-manager 0 cf289b2d28a3d kube-controller-manager-addons-662808
==> coredns [896abc0e03ea] <==
[INFO] 10.244.0.23:40289 - 26455 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006207103s
[INFO] 10.244.0.23:52764 - 22657 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00557717s
[INFO] 10.244.0.23:34767 - 40349 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005987241s
[INFO] 10.244.0.23:52230 - 5890 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004366156s
[INFO] 10.244.0.23:40289 - 4528 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004304377s
[INFO] 10.244.0.23:54802 - 10025 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00481132s
[INFO] 10.244.0.23:40716 - 62638 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006114344s
[INFO] 10.244.0.23:59305 - 18490 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004570626s
[INFO] 10.244.0.23:43430 - 16331 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00731684s
[INFO] 10.244.0.23:34767 - 22823 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005740799s
[INFO] 10.244.0.23:59305 - 53411 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005722537s
[INFO] 10.244.0.23:54802 - 6185 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004851224s
[INFO] 10.244.0.23:52230 - 19158 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005498315s
[INFO] 10.244.0.23:43430 - 44683 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005207469s
[INFO] 10.244.0.23:40716 - 55430 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005081415s
[INFO] 10.244.0.23:52764 - 18902 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00613158s
[INFO] 10.244.0.23:43430 - 25725 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000136193s
[INFO] 10.244.0.23:59305 - 47013 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000119123s
[INFO] 10.244.0.23:52230 - 54068 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072255s
[INFO] 10.244.0.23:34767 - 22490 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053071s
[INFO] 10.244.0.23:52764 - 19646 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049646s
[INFO] 10.244.0.23:40289 - 37858 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.041211359s
[INFO] 10.244.0.23:40716 - 39367 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000207832s
[INFO] 10.244.0.23:54802 - 25291 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000163712s
[INFO] 10.244.0.23:40289 - 22018 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000128136s
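The NXDOMAIN responses above are expected: the client walks the pod's DNS search path (cluster.local, then the GCE-specific c.k8s-minikube.internal and google.internal suffixes) before the fully qualified service name answers NOERROR. The same lookup could be reproduced from a throwaway pod, assuming an image can actually be pulled (which this run could not):

kubectl --context addons-662808 run dns-check --rm -it --restart=Never --image=busybox:stable -- nslookup hello-world-app.default.svc.cluster.local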
==> describe nodes <==
Name: addons-662808
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-662808
kubernetes.io/os=linux
minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
minikube.k8s.io/name=addons-662808
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_04_07T12_48_23_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-662808
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 07 Apr 2025 12:48:21 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-662808
AcquireTime: <unset>
RenewTime: Mon, 07 Apr 2025 12:55:32 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 07 Apr 2025 12:52:59 +0000 Mon, 07 Apr 2025 12:48:19 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 07 Apr 2025 12:52:59 +0000 Mon, 07 Apr 2025 12:48:19 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 07 Apr 2025 12:52:59 +0000 Mon, 07 Apr 2025 12:48:19 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 07 Apr 2025 12:52:59 +0000 Mon, 07 Apr 2025 12:48:21 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-662808
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859364Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859364Ki
pods: 110
System Info:
Machine ID: 507234e98f564487b1aa1c00ea17aac2
System UUID: 55ac8795-6631-4183-b1ad-654ae3cdc752
Boot ID: 1751ef18-988c-47e7-9c05-4bbf13b6e72b
Kernel Version: 5.15.0-1078-gcp
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://28.0.4
Kubelet Version: v1.32.2
Kube-Proxy Version: v1.32.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m45s
default hello-world-app-7d9564db4-rps6j 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m4s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m13s
default test-local-path 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m6s
kube-system coredns-668d6bf9bc-2kx5j 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 7m10s
kube-system etcd-addons-662808 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 7m15s
kube-system kube-apiserver-addons-662808 250m (3%) 0 (0%) 0 (0%) 0 (0%) 7m15s
kube-system kube-controller-manager-addons-662808 200m (2%) 0 (0%) 0 (0%) 0 (0%) 7m15s
kube-system kube-proxy-cgdfz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m10s
kube-system kube-scheduler-addons-662808 100m (1%) 0 (0%) 0 (0%) 0 (0%) 7m15s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m5s
local-path-storage local-path-provisioner-76f89f99b5-zz5vr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m5s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (9%) 0 (0%)
memory 170Mi (0%) 170Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m5s kube-proxy
Normal Starting 7m15s kubelet Starting kubelet.
Warning CgroupV1 7m15s kubelet cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal NodeAllocatableEnforced 7m15s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 7m15s kubelet Node addons-662808 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m15s kubelet Node addons-662808 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m15s kubelet Node addons-662808 status is now: NodeHasSufficientPID
Normal RegisteredNode 7m11s node-controller Node addons-662808 event: Registered Node addons-662808 in Controller
==> dmesg <==
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 65 55 c9 33 59 08 06
[ +0.141473] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 bf 1f c7 ae 9e 08 06
[ +21.609237] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d5 9a 49 e1 56 08 06
[ +0.000651] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 e3 ca 1b 7f d5 08 06
[Apr 7 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 30 e1 9f c6 d3 08 06
[ +0.097803] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 81 9a df 00 56 08 06
[Apr 7 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff de 1f 16 4b 47 75 08 06
[ +0.000518] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a6 82 33 b5 99 08 06
[Apr 7 12:52] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
[ +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 4c 24 4e 2e 75 08 06
[ +0.000501] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a6 82 33 b5 99 08 06
[ +0.000635] IPv4: martian source 10.244.0.32 from 10.244.0.9, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 4f 27 69 51 39 08 06
[ +12.201481] IPv4: martian source 10.244.0.33 from 10.244.0.23, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d5 9a 49 e1 56 08 06
[ +0.317597] IPv4: martian source 10.244.0.23 from 10.244.0.2, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e a6 82 33 b5 99 08 06
==> etcd [920768df661b] <==
{"level":"info","ts":"2025-04-07T12:48:19.041762Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2025-04-07T12:48:19.041791Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2025-04-07T12:48:19.629637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2025-04-07T12:48:19.629683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2025-04-07T12:48:19.629698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2025-04-07T12:48:19.629761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2025-04-07T12:48:19.629774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2025-04-07T12:48:19.629784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2025-04-07T12:48:19.629793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2025-04-07T12:48:19.630728Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-662808 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2025-04-07T12:48:19.630724Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-04-07T12:48:19.630737Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2025-04-07T12:48:19.630749Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-04-07T12:48:19.631086Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-04-07T12:48:19.631113Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-04-07T12:48:19.631265Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2025-04-07T12:48:19.631343Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2025-04-07T12:48:19.631370Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2025-04-07T12:48:19.631752Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-04-07T12:48:19.631800Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-04-07T12:48:19.632483Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-04-07T12:48:19.632485Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2025-04-07T12:48:32.220218Z","caller":"traceutil/trace.go:171","msg":"trace[2071743628] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"100.382692ms","start":"2025-04-07T12:48:32.119814Z","end":"2025-04-07T12:48:32.220197Z","steps":["trace[2071743628] 'process raft request' (duration: 21.015821ms)","trace[2071743628] 'compare' (duration: 78.799843ms)"],"step_count":2}
{"level":"info","ts":"2025-04-07T12:48:32.220453Z","caller":"traceutil/trace.go:171","msg":"trace[20732756] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"100.569047ms","start":"2025-04-07T12:48:32.119875Z","end":"2025-04-07T12:48:32.220444Z","steps":["trace[20732756] 'process raft request' (duration: 99.859087ms)"],"step_count":1}
{"level":"info","ts":"2025-04-07T12:48:32.220540Z","caller":"traceutil/trace.go:171","msg":"trace[862675044] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"100.364164ms","start":"2025-04-07T12:48:32.120170Z","end":"2025-04-07T12:48:32.220534Z","steps":["trace[862675044] 'process raft request' (duration: 99.618527ms)"],"step_count":1}
==> kernel <==
12:55:38 up 20:38, 0 users, load average: 0.83, 0.68, 0.53
Linux addons-662808 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [853e5bbacf4c] <==
W0407 12:51:44.625398 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0407 12:51:45.066978 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
E0407 12:52:01.283207 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41846: use of closed network connection
E0407 12:52:01.461883 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41870: use of closed network connection
I0407 12:52:10.943381 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.105.33"}
I0407 12:52:22.654344 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0407 12:52:24.978052 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0407 12:52:25.174810 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.149.170"}
I0407 12:52:27.263039 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W0407 12:52:28.430617 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
I0407 12:52:34.687885 1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.103.112"}
I0407 12:52:45.114334 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0407 12:52:45.114390 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0407 12:52:45.127046 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0407 12:52:45.127100 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0407 12:52:45.128585 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0407 12:52:45.128643 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0407 12:52:45.141853 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0407 12:52:45.141917 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0407 12:52:45.252291 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0407 12:52:45.252336 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0407 12:52:46.129357 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0407 12:52:46.252830 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W0407 12:52:46.264675 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I0407 12:53:08.037599 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
==> kube-controller-manager [b0deb3bc6ed6] <==
E0407 12:55:18.940353 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0407 12:55:20.744770 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0407 12:55:20.747841 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
W0407 12:55:20.748744 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0407 12:55:20.748792 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0407 12:55:24.607684 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0407 12:55:24.608767 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
W0407 12:55:24.609581 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0407 12:55:24.609611 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0407 12:55:26.221673 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0407 12:55:26.222698 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
W0407 12:55:26.223670 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0407 12:55:26.223711 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0407 12:55:30.039758 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0407 12:55:30.040722 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="bus.volcano.sh/v1alpha1, Resource=commands"
W0407 12:55:30.041612 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0407 12:55:30.041654 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0407 12:55:35.086184 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0407 12:55:35.087109 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="scheduling.volcano.sh/v1beta1, Resource=queues"
W0407 12:55:35.088030 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0407 12:55:35.088066 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0407 12:55:35.331332 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0407 12:55:35.332328 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="scheduling.volcano.sh/v1beta1, Resource=podgroups"
W0407 12:55:35.333301 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0407 12:55:35.333335 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
==> kube-proxy [6dec7c27faa4] <==
I0407 12:48:31.825911 1 server_linux.go:66] "Using iptables proxy"
I0407 12:48:32.432388 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0407 12:48:32.432498 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0407 12:48:32.721647 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0407 12:48:32.721717 1 server_linux.go:170] "Using iptables Proxier"
I0407 12:48:32.729212 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0407 12:48:32.737479 1 server.go:497] "Version info" version="v1.32.2"
I0407 12:48:32.737521 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0407 12:48:32.744191 1 config.go:199] "Starting service config controller"
I0407 12:48:32.824043 1 shared_informer.go:313] Waiting for caches to sync for service config
I0407 12:48:32.821324 1 config.go:105] "Starting endpoint slice config controller"
I0407 12:48:32.824089 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0407 12:48:32.821803 1 config.go:329] "Starting node config controller"
I0407 12:48:32.824100 1 shared_informer.go:313] Waiting for caches to sync for node config
I0407 12:48:32.924577 1 shared_informer.go:320] Caches are synced for node config
I0407 12:48:32.924618 1 shared_informer.go:320] Caches are synced for service config
I0407 12:48:32.924631 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [18ae4fce9423] <==
E0407 12:48:20.942096 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0407 12:48:20.942098 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0407 12:48:20.942127 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0407 12:48:20.942143 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0407 12:48:20.942432 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0407 12:48:20.942520 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E0407 12:48:20.942568 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
E0407 12:48:20.942517 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0407 12:48:20.943345 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0407 12:48:20.943377 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0407 12:48:20.943610 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0407 12:48:20.943639 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0407 12:48:21.807656 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0407 12:48:21.807698 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0407 12:48:21.847602 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0407 12:48:21.847664 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0407 12:48:21.923314 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0407 12:48:21.923357 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0407 12:48:21.966324 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0407 12:48:21.966388 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0407 12:48:22.003874 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0407 12:48:22.003912 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0407 12:48:22.019121 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0407 12:48:22.019164 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0407 12:48:23.938749 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Apr 07 12:53:59 addons-662808 kubelet[2632]: E0407 12:53:59.351841 2632 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
Apr 07 12:53:59 addons-662808 kubelet[2632]: E0407 12:53:59.351919 2632 kuberuntime_image.go:55] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
Apr 07 12:53:59 addons-662808 kubelet[2632]: E0407 12:53:59.352097 2632 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:busybox,Image:busybox:stable,Command:[sh -c echo 'local-path-provisioner' > /test/file1],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/test,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ffsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-local-path_default(d80409e4-1900-4a8f-9c48-4e8e81479f9a): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Apr 07 12:53:59 addons-662808 kubelet[2632]: E0407 12:53:59.353239 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
Apr 07 12:54:10 addons-662808 kubelet[2632]: E0407 12:54:10.232947 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
Apr 07 12:54:12 addons-662808 kubelet[2632]: E0407 12:54:12.366324 2632 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
Apr 07 12:54:12 addons-662808 kubelet[2632]: E0407 12:54:12.366385 2632 kuberuntime_image.go:55] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
Apr 07 12:54:12 addons-662808 kubelet[2632]: E0407 12:54:12.366488 2632 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:hello-world-app,Image:docker.io/kicbase/echo-server:1.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9qzp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod hello-world-app-7d9564db4-rps6j_default(5e3c9230-c6e8-4e0b-babf-9ce5ce906846): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Apr 07 12:54:12 addons-662808 kubelet[2632]: E0407 12:54:12.367673 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-rps6j" podUID="5e3c9230-c6e8-4e0b-babf-9ce5ce906846"
Apr 07 12:54:21 addons-662808 kubelet[2632]: E0407 12:54:21.233719 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
Apr 07 12:54:25 addons-662808 kubelet[2632]: E0407 12:54:25.233151 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-rps6j" podUID="5e3c9230-c6e8-4e0b-babf-9ce5ce906846"
Apr 07 12:54:28 addons-662808 kubelet[2632]: I0407 12:54:28.230947 2632 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Apr 07 12:54:33 addons-662808 kubelet[2632]: E0407 12:54:33.232761 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
Apr 07 12:54:38 addons-662808 kubelet[2632]: E0407 12:54:38.233270 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-rps6j" podUID="5e3c9230-c6e8-4e0b-babf-9ce5ce906846"
Apr 07 12:54:45 addons-662808 kubelet[2632]: E0407 12:54:45.232926 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
Apr 07 12:54:51 addons-662808 kubelet[2632]: E0407 12:54:51.232820 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-rps6j" podUID="5e3c9230-c6e8-4e0b-babf-9ce5ce906846"
Apr 07 12:54:59 addons-662808 kubelet[2632]: E0407 12:54:59.233468 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
Apr 07 12:55:05 addons-662808 kubelet[2632]: E0407 12:55:05.232994 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-rps6j" podUID="5e3c9230-c6e8-4e0b-babf-9ce5ce906846"
Apr 07 12:55:13 addons-662808 kubelet[2632]: E0407 12:55:13.233506 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
Apr 07 12:55:16 addons-662808 kubelet[2632]: E0407 12:55:16.233292 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-rps6j" podUID="5e3c9230-c6e8-4e0b-babf-9ce5ce906846"
Apr 07 12:55:27 addons-662808 kubelet[2632]: E0407 12:55:27.233521 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-rps6j" podUID="5e3c9230-c6e8-4e0b-babf-9ce5ce906846"
Apr 07 12:55:28 addons-662808 kubelet[2632]: E0407 12:55:28.454603 2632 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
Apr 07 12:55:28 addons-662808 kubelet[2632]: E0407 12:55:28.454657 2632 kuberuntime_image.go:55] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
Apr 07 12:55:28 addons-662808 kubelet[2632]: E0407 12:55:28.454778 2632 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:busybox,Image:busybox:stable,Command:[sh -c echo 'local-path-provisioner' > /test/file1],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/test,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ffsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-local-path_default(d80409e4-1900-4a8f-9c48-4e8e81479f9a): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Apr 07 12:55:28 addons-662808 kubelet[2632]: E0407 12:55:28.455958 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
==> storage-provisioner [8453eafd0d8b] <==
I0407 12:48:37.129932 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0407 12:48:37.226583 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0407 12:48:37.226661 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0407 12:48:37.322710 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0407 12:48:37.323158 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-662808_6facb6dc-a95d-4e49-9932-41d1bf4bf1b9!
I0407 12:48:37.324247 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"21766113-f71c-47e8-a214-f06f7579e823", APIVersion:"v1", ResourceVersion:"604", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-662808_6facb6dc-a95d-4e49-9932-41d1bf4bf1b9 became leader
I0407 12:48:37.424128 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-662808_6facb6dc-a95d-4e49-9932-41d1bf4bf1b9!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-662808 -n addons-662808
helpers_test.go:261: (dbg) Run: kubectl --context addons-662808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-rps6j test-local-path
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-662808 describe pod hello-world-app-7d9564db4-rps6j test-local-path
helpers_test.go:282: (dbg) kubectl --context addons-662808 describe pod hello-world-app-7d9564db4-rps6j test-local-path:
-- stdout --
Name: hello-world-app-7d9564db4-rps6j
Namespace: default
Priority: 0
Service Account: default
Node: addons-662808/192.168.49.2
Start Time: Mon, 07 Apr 2025 12:52:34 +0000
Labels: app=hello-world-app
pod-template-hash=7d9564db4
Annotations: <none>
Status: Pending
IP: 10.244.0.35
IPs:
IP: 10.244.0.35
Controlled By: ReplicaSet/hello-world-app-7d9564db4
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9qzp7 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-9qzp7:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m5s default-scheduler Successfully assigned default/hello-world-app-7d9564db4-rps6j to addons-662808
Warning Failed 2m19s (x2 over 3m4s) kubelet Failed to pull image "docker.io/kicbase/echo-server:1.0": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal Pulling 87s (x4 over 3m4s) kubelet Pulling image "docker.io/kicbase/echo-server:1.0"
Warning Failed 87s (x4 over 3m4s) kubelet Error: ErrImagePull
Warning Failed 87s (x2 over 2m49s) kubelet Failed to pull image "docker.io/kicbase/echo-server:1.0": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal BackOff 12s (x12 over 3m3s) kubelet Back-off pulling image "docker.io/kicbase/echo-server:1.0"
Warning Failed 12s (x12 over 3m3s) kubelet Error: ImagePullBackOff
Name: test-local-path
Namespace: default
Priority: 0
Service Account: default
Node: addons-662808/192.168.49.2
Start Time: Mon, 07 Apr 2025 12:52:36 +0000
Labels: run=test-local-path
Annotations: <none>
Status: Pending
IP: 10.244.0.36
IPs:
IP: 10.244.0.36
Containers:
busybox:
Container ID:
Image: busybox:stable
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
echo 'local-path-provisioner' > /test/file1
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/test from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5ffsn (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: test-pvc
ReadOnly: false
kube-api-access-5ffsn:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m3s default-scheduler Successfully assigned default/test-local-path to addons-662808
Warning Failed 100s (x4 over 3m2s) kubelet Failed to pull image "busybox:stable": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal BackOff 26s (x11 over 3m1s) kubelet Back-off pulling image "busybox:stable"
Warning Failed 26s (x11 over 3m1s) kubelet Error: ImagePullBackOff
Normal Pulling 11s (x5 over 3m2s) kubelet Pulling image "busybox:stable"
Warning Failed 11s (x5 over 3m2s) kubelet Error: ErrImagePull
Warning Failed 11s kubelet Failed to pull image "busybox:stable": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/LocalPath FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p addons-662808 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-662808 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.92457104s)
--- FAIL: TestAddons/parallel/LocalPath (229.37s)
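Note on the failure above: every pull error in this run ("toomanyrequests" for busybox:stable and docker.io/kicbase/echo-server:1.0) is Docker Hub's unauthenticated pull rate limit, not a problem in the local-path provisioner itself. Below is a minimal, stand-alone sketch (not part of the minikube test suite) for confirming from the CI host whether the anonymous quota is exhausted, using Docker Hub's documented ratelimitpreview/test probe; the file name and the bare panics are illustrative only.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 1. Fetch an anonymous bearer token scoped to Docker Hub's rate-limit probe repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the probe manifest; the quota information comes back in response headers
	//    and the request itself does not count as an image pull.
	req, _ := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()

	// Example value: "100;w=21600" means 100 anonymous pulls per 21600s (6h) window.
	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}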