=== RUN TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run: kubectl --context addons-444927 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run: kubectl --context addons-444927 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run: kubectl --context addons-444927 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run: kubectl --context addons-444927 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run: kubectl --context addons-444927 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run: kubectl --context addons-444927 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run: kubectl --context addons-444927 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run: kubectl --context addons-444927 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [99e0a41e-dea7-4fc3-a083-fa0680179d33] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/parallel/LocalPath: WARNING: pod list for "default" "run=test-local-path" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:901: ***** TestAddons/parallel/LocalPath: pod "run=test-local-path" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:901: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-444927 -n addons-444927
addons_test.go:901: TestAddons/parallel/LocalPath: showing logs for failed pods as of 2025-02-10 12:39:01.99796146 +0000 UTC m=+401.155128614
addons_test.go:901: (dbg) Run: kubectl --context addons-444927 describe po test-local-path -n default
addons_test.go:901: (dbg) kubectl --context addons-444927 describe po test-local-path -n default:
Name:             test-local-path
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-444927/192.168.49.2
Start Time:       Mon, 10 Feb 2025 12:36:01 +0000
Labels:           run=test-local-path
Annotations:      <none>
Status:           Pending
IP:               10.244.0.32
IPs:
  IP:  10.244.0.32
Containers:
  busybox:
    Container ID:
    Image:          busybox:stable
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      sh
      -c
      echo 'local-path-provisioner' > /test/file1
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /test from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qvtsj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  test-pvc
    ReadOnly:   false
  kube-api-access-qvtsj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m1s                 default-scheduler  Successfully assigned default/test-local-path to addons-444927
  Warning  Failed     2m59s                kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:afa67e3cea50ce204060a6c0113bd63cb289cc0f555d5a80a3bb7c0f62b95add: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    84s (x4 over 3m)     kubelet            Pulling image "busybox:stable"
  Warning  Failed     83s (x4 over 2m59s)  kubelet            Error: ErrImagePull
  Warning  Failed     83s (x3 over 2m44s)  kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    4s (x11 over 2m59s)  kubelet            Back-off pulling image "busybox:stable"
  Warning  Failed     4s (x11 over 2m59s)  kubelet            Error: ImagePullBackOff
addons_test.go:901: (dbg) Run: kubectl --context addons-444927 logs test-local-path -n default
addons_test.go:901: (dbg) Non-zero exit: kubectl --context addons-444927 logs test-local-path -n default: exit status 1 (66.292909ms)
** stderr **
Error from server (BadRequest): container "busybox" in pod "test-local-path" is waiting to start: trying and failing to pull image
** /stderr **
addons_test.go:901: kubectl --context addons-444927 logs test-local-path -n default: exit status 1
addons_test.go:902: failed waiting for test-local-path pod: run=test-local-path within 3m0s: context deadline exceeded
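The kubelet events above identify the root cause: anonymous pulls of `busybox:stable` from Docker Hub were rejected with `429 Too Many Requests` (the per-IP anonymous rate limit). A common mitigation is to authenticate image pulls so they count against an authenticated account's higher quota. The sketch below is illustrative, not part of the minikube test fixtures: the secret name `dockerhub-creds` and the base64 placeholder are assumptions, and the pod spec is reconstructed from the `kubectl describe po` output above with `imagePullSecrets` added.

```yaml
# Illustrative mitigation sketch (not part of the test's testdata):
# authenticate pulls so they count against an authenticated Docker Hub
# quota instead of the anonymous per-IP limit.
apiVersion: v1
kind: Secret
metadata:
  name: dockerhub-creds        # hypothetical name
  namespace: default
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded docker config.json>   # placeholder, fill in
---
# Pod spec reconstructed from the describe output above, plus imagePullSecrets.
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  namespace: default
  labels:
    run: test-local-path
spec:
  imagePullSecrets:
    - name: dockerhub-creds
  containers:
    - name: busybox
      image: busybox:stable
      command: ["sh", "-c", "echo 'local-path-provisioner' > /test/file1"]
      volumeMounts:
        - name: data
          mountPath: /test
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc
```

Alternatively, pre-loading the image into the node with `minikube image load busybox:stable` before the test runs avoids the registry round-trip entirely.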
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect addons-444927
helpers_test.go:235: (dbg) docker inspect addons-444927:
-- stdout --
[
{
"Id": "0392e5be00559452610f48922afb15fc3bf3718238ee5a2750b0b0817e743808",
"Created": "2025-02-10T12:32:58.523536679Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 80410,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-02-10T12:32:58.635075213Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:e72c4cbe9b296d8a58fbcae1a7b969fa1cee662cd7b86f2d4efc5e146519cf0a",
"ResolvConfPath": "/var/lib/docker/containers/0392e5be00559452610f48922afb15fc3bf3718238ee5a2750b0b0817e743808/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/0392e5be00559452610f48922afb15fc3bf3718238ee5a2750b0b0817e743808/hostname",
"HostsPath": "/var/lib/docker/containers/0392e5be00559452610f48922afb15fc3bf3718238ee5a2750b0b0817e743808/hosts",
"LogPath": "/var/lib/docker/containers/0392e5be00559452610f48922afb15fc3bf3718238ee5a2750b0b0817e743808/0392e5be00559452610f48922afb15fc3bf3718238ee5a2750b0b0817e743808-json.log",
"Name": "/addons-444927",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"addons-444927:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "addons-444927",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/83d95d0b735939a4a29b22a04372394207a6bc27b80ad4a75ca151335b6b2534-init/diff:/var/lib/docker/overlay2/9ffca27f7ebed742e3d0dd8f2061c1044c6b8fc8f60ace2c8ab1f353604acf23/diff",
"MergedDir": "/var/lib/docker/overlay2/83d95d0b735939a4a29b22a04372394207a6bc27b80ad4a75ca151335b6b2534/merged",
"UpperDir": "/var/lib/docker/overlay2/83d95d0b735939a4a29b22a04372394207a6bc27b80ad4a75ca151335b6b2534/diff",
"WorkDir": "/var/lib/docker/overlay2/83d95d0b735939a4a29b22a04372394207a6bc27b80ad4a75ca151335b6b2534/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "addons-444927",
"Source": "/var/lib/docker/volumes/addons-444927/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "addons-444927",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "addons-444927",
"name.minikube.sigs.k8s.io": "addons-444927",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "142168837ddfcc616a83ce727af2935e56e87646539fb573615a064675a21b43",
"SandboxKey": "/var/run/docker/netns/142168837ddf",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32773"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32774"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32777"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32775"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32776"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"addons-444927": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "f0cf34e07a0770fabc43057d7e82ad303370397ce69358204b84f4691cfe4d51",
"EndpointID": "2a89d3bc55336ebfc48bd151fb50618036c258700d99dd31d15c282858ae35a2",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"addons-444927",
"0392e5be0055"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-444927 -n addons-444927
helpers_test.go:244: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p addons-444927 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-444927 logs -n 25: (1.124050718s)
helpers_test.go:252: TestAddons/parallel/LocalPath logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| delete | --all | minikube | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:32 UTC |
| delete | -p download-only-867318 | download-only-867318 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:32 UTC |
| delete | -p download-only-424031 | download-only-424031 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:32 UTC |
| delete | -p download-only-867318 | download-only-867318 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:32 UTC |
| start | --download-only -p | download-docker-433372 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | |
| | download-docker-433372 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p download-docker-433372 | download-docker-433372 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:32 UTC |
| start | --download-only -p | binary-mirror-655095 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | |
| | binary-mirror-655095 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:37591 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p binary-mirror-655095 | binary-mirror-655095 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:32 UTC |
| addons | enable dashboard -p | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | |
| | addons-444927 | | | | | |
| addons | disable dashboard -p | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | |
| | addons-444927 | | | | | |
| start | -p addons-444927 --wait=true | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:34 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --addons=amd-gpu-device-plugin | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| addons | addons-444927 addons disable | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:35 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-444927 addons disable | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:35 UTC |
| | gcp-auth --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | enable headlamp | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:35 UTC |
| | -p addons-444927 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-444927 addons | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:35 UTC |
| | disable nvidia-device-plugin | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-444927 addons | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:35 UTC |
| | disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-444927 addons disable | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:36 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-444927 addons | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
| | disable cloud-spanner | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ip | addons-444927 ip | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
| addons | addons-444927 addons disable | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-444927 addons | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
| | disable inspektor-gadget | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-444927 addons disable | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
| | amd-gpu-device-plugin | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-444927 addons disable | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| addons | addons-444927 addons | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:37 UTC | 10 Feb 25 12:37 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-444927 addons | addons-444927 | jenkins | v1.35.0 | 10 Feb 25 12:37 UTC | 10 Feb 25 12:37 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/02/10 12:32:34
Running on machine: ubuntu-20-agent-15
Binary: Built with gc go1.23.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0210 12:32:34.668595 79643 out.go:345] Setting OutFile to fd 1 ...
I0210 12:32:34.668700 79643 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:32:34.668712 79643 out.go:358] Setting ErrFile to fd 2...
I0210 12:32:34.668718 79643 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:32:34.668934 79643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
I0210 12:32:34.669556 79643 out.go:352] Setting JSON to false
I0210 12:32:34.670366 79643 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11704,"bootTime":1739179051,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0210 12:32:34.670466 79643 start.go:139] virtualization: kvm guest
I0210 12:32:34.672668 79643 out.go:177] * [addons-444927] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0210 12:32:34.673970 79643 notify.go:220] Checking for updates...
I0210 12:32:34.673991 79643 out.go:177] - MINIKUBE_LOCATION=20390
I0210 12:32:34.675559 79643 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0210 12:32:34.676919 79643 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
I0210 12:32:34.678284 79643 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
I0210 12:32:34.679831 79643 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0210 12:32:34.681067 79643 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0210 12:32:34.682430 79643 driver.go:394] Setting default libvirt URI to qemu:///system
I0210 12:32:34.703533 79643 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
I0210 12:32:34.703622 79643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0210 12:32:34.749138 79643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-10 12:32:34.739909471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0210 12:32:34.749238 79643 docker.go:318] overlay module found
I0210 12:32:34.751154 79643 out.go:177] * Using the docker driver based on user configuration
I0210 12:32:34.752616 79643 start.go:297] selected driver: docker
I0210 12:32:34.752634 79643 start.go:901] validating driver "docker" against <nil>
I0210 12:32:34.752646 79643 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0210 12:32:34.753453 79643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0210 12:32:34.797223 79643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-10 12:32:34.788905295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0210 12:32:34.797390 79643 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0210 12:32:34.797613 79643 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0210 12:32:34.799191 79643 out.go:177] * Using Docker driver with root privileges
I0210 12:32:34.800544 79643 cni.go:84] Creating CNI manager for ""
I0210 12:32:34.800625 79643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0210 12:32:34.800640 79643 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0210 12:32:34.800720 79643 start.go:340] cluster config:
{Name:addons-444927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-444927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0210 12:32:34.802060 79643 out.go:177] * Starting "addons-444927" primary control-plane node in "addons-444927" cluster
I0210 12:32:34.803239 79643 cache.go:121] Beginning downloading kic base image for docker with containerd
I0210 12:32:34.804545 79643 out.go:177] * Pulling base image v0.0.46 ...
I0210 12:32:34.805740 79643 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0210 12:32:34.805798 79643 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
I0210 12:32:34.805811 79643 cache.go:56] Caching tarball of preloaded images
I0210 12:32:34.805838 79643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
I0210 12:32:34.805904 79643 preload.go:172] Found /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0210 12:32:34.805919 79643 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
I0210 12:32:34.806248 79643 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/config.json ...
I0210 12:32:34.806277 79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/config.json: {Name:mk0dcd327ca51df60d1e98951b839a50c380ada6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 12:32:34.821959 79643 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
I0210 12:32:34.822098 79643 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
I0210 12:32:34.822114 79643 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
I0210 12:32:34.822119 79643 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
I0210 12:32:34.822125 79643 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
I0210 12:32:34.822133 79643 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from local cache
I0210 12:32:46.371979 79643 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from cached tarball
I0210 12:32:46.372030 79643 cache.go:230] Successfully downloaded all kic artifacts
I0210 12:32:46.372071 79643 start.go:360] acquireMachinesLock for addons-444927: {Name:mke3114138a91c8004073314acab4a7dffe2d711 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0210 12:32:46.372184 79643 start.go:364] duration metric: took 86.427µs to acquireMachinesLock for "addons-444927"
I0210 12:32:46.372213 79643 start.go:93] Provisioning new machine with config: &{Name:addons-444927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-444927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0210 12:32:46.372298 79643 start.go:125] createHost starting for "" (driver="docker")
I0210 12:32:46.374333 79643 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0210 12:32:46.374564 79643 start.go:159] libmachine.API.Create for "addons-444927" (driver="docker")
I0210 12:32:46.374599 79643 client.go:168] LocalClient.Create starting
I0210 12:32:46.374709 79643 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem
I0210 12:32:46.670023 79643 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/cert.pem
I0210 12:32:46.816347 79643 cli_runner.go:164] Run: docker network inspect addons-444927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0210 12:32:46.832492 79643 cli_runner.go:211] docker network inspect addons-444927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0210 12:32:46.832571 79643 network_create.go:284] running [docker network inspect addons-444927] to gather additional debugging logs...
I0210 12:32:46.832595 79643 cli_runner.go:164] Run: docker network inspect addons-444927
W0210 12:32:46.848507 79643 cli_runner.go:211] docker network inspect addons-444927 returned with exit code 1
I0210 12:32:46.848542 79643 network_create.go:287] error running [docker network inspect addons-444927]: docker network inspect addons-444927: exit status 1
stdout:
[]
stderr:
Error response from daemon: network addons-444927 not found
I0210 12:32:46.848555 79643 network_create.go:289] output of [docker network inspect addons-444927]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network addons-444927 not found
** /stderr **
I0210 12:32:46.848647 79643 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0210 12:32:46.865047 79643 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016fc7a0}
I0210 12:32:46.865092 79643 network_create.go:124] attempt to create docker network addons-444927 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0210 12:32:46.865151 79643 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-444927 addons-444927
I0210 12:32:46.925267 79643 network_create.go:108] docker network addons-444927 192.168.49.0/24 created
I0210 12:32:46.925301 79643 kic.go:121] calculated static IP "192.168.49.2" for the "addons-444927" container
I0210 12:32:46.925367 79643 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0210 12:32:46.941561 79643 cli_runner.go:164] Run: docker volume create addons-444927 --label name.minikube.sigs.k8s.io=addons-444927 --label created_by.minikube.sigs.k8s.io=true
I0210 12:32:46.959106 79643 oci.go:103] Successfully created a docker volume addons-444927
I0210 12:32:46.959194 79643 cli_runner.go:164] Run: docker run --rm --name addons-444927-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-444927 --entrypoint /usr/bin/test -v addons-444927:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
I0210 12:32:54.002619 79643 cli_runner.go:217] Completed: docker run --rm --name addons-444927-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-444927 --entrypoint /usr/bin/test -v addons-444927:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib: (7.043375908s)
I0210 12:32:54.002649 79643 oci.go:107] Successfully prepared a docker volume addons-444927
I0210 12:32:54.002669 79643 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0210 12:32:54.002691 79643 kic.go:194] Starting extracting preloaded images to volume ...
I0210 12:32:54.002751 79643 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-444927:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
I0210 12:32:58.463230 79643 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-444927:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.460438996s)
I0210 12:32:58.463262 79643 kic.go:203] duration metric: took 4.460568319s to extract preloaded images to volume ...
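The extraction step above streams the preloaded-images tarball into the machine volume with `tar -I`. A minimal sketch of that shape, using scratch directories and substituting gzip for lz4 so it runs without the lz4 tool installed (paths and payload contents are made up for illustration):

```shell
# Sketch of the preload extraction: a compressed tarball unpacked into a
# target directory via tar's -I (--use-compress-program) flag. The log
# uses lz4 inside the kicbase container; gzip stands in here.
src=$(mktemp -d)
dst=$(mktemp -d)
tarball=$(mktemp --suffix=.tar.gz)

# Fake "preloaded images" payload standing in for /var/lib/containerd.
mkdir -p "$src/lib/containerd"
echo layer-data > "$src/lib/containerd/layer.bin"

# Pack, then extract the way the log does: tar -I <prog> -xf <tar> -C <dir>
tar -C "$src" -I gzip -cf "$tarball" .
tar -I gzip -xf "$tarball" -C "$dst"
ls "$dst/lib/containerd"
```

Streaming the archive straight into the named volume is why the step takes only a few seconds here (~4.5s in the log) compared to pulling each image individually.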
W0210 12:32:58.463401 79643 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0210 12:32:58.463509 79643 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0210 12:32:58.509019 79643 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-444927 --name addons-444927 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-444927 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-444927 --network addons-444927 --ip 192.168.49.2 --volume addons-444927:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
I0210 12:32:58.807006 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Running}}
I0210 12:32:58.825650 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:32:58.843974 79643 cli_runner.go:164] Run: docker exec addons-444927 stat /var/lib/dpkg/alternatives/iptables
I0210 12:32:58.885985 79643 oci.go:144] the created container "addons-444927" has a running status.
I0210 12:32:58.886014 79643 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa...
I0210 12:32:59.099507 79643 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0210 12:32:59.124199 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:32:59.145966 79643 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0210 12:32:59.145986 79643 kic_runner.go:114] Args: [docker exec --privileged addons-444927 chown docker:docker /home/docker/.ssh/authorized_keys]
I0210 12:32:59.195806 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:32:59.215157 79643 machine.go:93] provisionDockerMachine start ...
I0210 12:32:59.215255 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:32:59.238965 79643 main.go:141] libmachine: Using SSH client type: native
I0210 12:32:59.239204 79643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil> [] 0s} 127.0.0.1 32773 <nil> <nil>}
I0210 12:32:59.239218 79643 main.go:141] libmachine: About to run SSH command:
hostname
I0210 12:32:59.467760 79643 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-444927
I0210 12:32:59.467796 79643 ubuntu.go:169] provisioning hostname "addons-444927"
I0210 12:32:59.467860 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:32:59.485324 79643 main.go:141] libmachine: Using SSH client type: native
I0210 12:32:59.485504 79643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil> [] 0s} 127.0.0.1 32773 <nil> <nil>}
I0210 12:32:59.485518 79643 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-444927 && echo "addons-444927" | sudo tee /etc/hostname
I0210 12:32:59.623529 79643 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-444927
I0210 12:32:59.623612 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:32:59.640437 79643 main.go:141] libmachine: Using SSH client type: native
I0210 12:32:59.640700 79643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil> [] 0s} 127.0.0.1 32773 <nil> <nil>}
I0210 12:32:59.640728 79643 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-444927' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-444927/g' /etc/hosts;
else
echo '127.0.1.1 addons-444927' | sudo tee -a /etc/hosts;
fi
fi
I0210 12:32:59.768448 79643 main.go:141] libmachine: SSH cmd err, output: <nil>:
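The /etc/hosts patch that just ran over SSH can be rehearsed against a scratch file; the starting contents and the simplified grep patterns below are illustrative, but the branch logic mirrors the logged script:

```shell
# Rehearse minikube's hostname patch on a scratch copy of /etc/hosts,
# so no sudo is needed.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 stale-name\n' > "$hosts"

name=addons-444927
# Only touch the file if no line already ends with the node name; then
# either rewrite the existing 127.0.1.1 entry or append a new one.
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```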
I0210 12:32:59.768498 79643 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20390-71607/.minikube CaCertPath:/home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20390-71607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20390-71607/.minikube}
I0210 12:32:59.768522 79643 ubuntu.go:177] setting up certificates
I0210 12:32:59.768534 79643 provision.go:84] configureAuth start
I0210 12:32:59.768619 79643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-444927
I0210 12:32:59.784829 79643 provision.go:143] copyHostCerts
I0210 12:32:59.784902 79643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20390-71607/.minikube/ca.pem (1082 bytes)
I0210 12:32:59.785015 79643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20390-71607/.minikube/cert.pem (1123 bytes)
I0210 12:32:59.785076 79643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20390-71607/.minikube/key.pem (1675 bytes)
I0210 12:32:59.785125 79643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20390-71607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca-key.pem org=jenkins.addons-444927 san=[127.0.0.1 192.168.49.2 addons-444927 localhost minikube]
I0210 12:33:00.067778 79643 provision.go:177] copyRemoteCerts
I0210 12:33:00.067835 79643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0210 12:33:00.067868 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:00.084351 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:00.180777 79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0210 12:33:00.202111 79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0210 12:33:00.223345 79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0210 12:33:00.244932 79643 provision.go:87] duration metric: took 476.378012ms to configureAuth
I0210 12:33:00.244972 79643 ubuntu.go:193] setting minikube options for container-runtime
I0210 12:33:00.245142 79643 config.go:182] Loaded profile config "addons-444927": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 12:33:00.245154 79643 machine.go:96] duration metric: took 1.029975044s to provisionDockerMachine
I0210 12:33:00.245161 79643 client.go:171] duration metric: took 13.870552404s to LocalClient.Create
I0210 12:33:00.245177 79643 start.go:167] duration metric: took 13.870614609s to libmachine.API.Create "addons-444927"
I0210 12:33:00.245186 79643 start.go:293] postStartSetup for "addons-444927" (driver="docker")
I0210 12:33:00.245195 79643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0210 12:33:00.245240 79643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0210 12:33:00.245273 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:00.261834 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:00.353181 79643 ssh_runner.go:195] Run: cat /etc/os-release
I0210 12:33:00.356287 79643 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0210 12:33:00.356330 79643 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0210 12:33:00.356344 79643 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0210 12:33:00.356353 79643 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0210 12:33:00.356365 79643 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-71607/.minikube/addons for local assets ...
I0210 12:33:00.356439 79643 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-71607/.minikube/files for local assets ...
I0210 12:33:00.356490 79643 start.go:296] duration metric: took 111.296631ms for postStartSetup
I0210 12:33:00.356787 79643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-444927
I0210 12:33:00.373260 79643 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/config.json ...
I0210 12:33:00.373505 79643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0210 12:33:00.373603 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:00.389977 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:00.477112 79643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
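The two `df | awk` probes above measure percent-used and gigabytes free on the volume backing /var. The same checks, pointed at /tmp so the sketch runs anywhere (`-P` is added here to keep df output on one line even for long device names):

```shell
# Capacity probes as in the log: percent used (df -h, column 5) and
# free space in whole gigabytes (df -BG, column 4).
target=/tmp
used_pct=$(df -P -h "$target" | awk 'NR==2{print $5}')
avail_gb=$(df -P -BG "$target" | awk 'NR==2{print $4}')
echo "used: $used_pct  free: $avail_gb"
```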
I0210 12:33:00.481085 79643 start.go:128] duration metric: took 14.108767844s to createHost
I0210 12:33:00.481152 79643 start.go:83] releasing machines lock for "addons-444927", held for 14.108913022s
I0210 12:33:00.481227 79643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-444927
I0210 12:33:00.497622 79643 ssh_runner.go:195] Run: cat /version.json
I0210 12:33:00.497685 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:00.497725 79643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0210 12:33:00.497815 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:00.514856 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:00.514978 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:00.670869 79643 ssh_runner.go:195] Run: systemctl --version
I0210 12:33:00.675037 79643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0210 12:33:00.679064 79643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0210 12:33:00.701681 79643 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0210 12:33:00.701753 79643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0210 12:33:00.727117 79643 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
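The CNI patch-up above does two things: it gives the loopback config a `"name"` field plus a pinned `cniVersion`, and it parks competing bridge/podman configs under a `.mk_disabled` suffix. A scratch-directory replay (file names and starting contents are invented; the sed edits mirror the logged commands):

```shell
# Replay of the CNI config patch-up on a scratch net.d directory.
netd=$(mktemp -d)
printf '{\n  "cniVersion": "0.3.1",\n  "type": "loopback"\n}\n' > "$netd/200-loopback.conf"
printf '{ "name": "podman" }\n' > "$netd/87-podman-bridge.conflist"

# Give the loopback config a "name" field if it lacks one, then pin
# cniVersion to 1.0.0, as the logged find/sed pipeline does.
grep -q '"name"' "$netd/200-loopback.conf" || \
  sed -i 's|"type": "loopback"|"name": "loopback",\n  &|' "$netd/200-loopback.conf"
sed -i 's|"cniVersion": "[^"]*"|"cniVersion": "1.0.0"|' "$netd/200-loopback.conf"

# Disable bridge/podman configs by renaming rather than deleting, so a
# later `minikube delete` could restore them.
for f in "$netd"/*bridge* "$netd"/*podman*; do
  if [ -f "$f" ]; then mv "$f" "$f.mk_disabled"; fi
done
ls "$netd"
```

Renaming instead of removing is what makes the later log line "disabled [...] bridge cni config(s)" reversible.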
I0210 12:33:00.727142 79643 start.go:495] detecting cgroup driver to use...
I0210 12:33:00.727175 79643 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0210 12:33:00.727217 79643 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0210 12:33:00.738573 79643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0210 12:33:00.749012 79643 docker.go:217] disabling cri-docker service (if available) ...
I0210 12:33:00.749070 79643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0210 12:33:00.761823 79643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0210 12:33:00.774908 79643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0210 12:33:00.850817 79643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0210 12:33:00.931903 79643 docker.go:233] disabling docker service ...
I0210 12:33:00.931980 79643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0210 12:33:00.949608 79643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0210 12:33:00.960603 79643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0210 12:33:01.038638 79643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0210 12:33:01.122211 79643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0210 12:33:01.132787 79643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0210 12:33:01.147083 79643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0210 12:33:01.155684 79643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0210 12:33:01.164180 79643 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0210 12:33:01.164236 79643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0210 12:33:01.172718 79643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0210 12:33:01.181029 79643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0210 12:33:01.189141 79643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0210 12:33:01.197558 79643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0210 12:33:01.205840 79643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0210 12:33:01.214568 79643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0210 12:33:01.223418 79643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
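The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place: pinning the sandbox (pause) image, forcing `SystemdCgroup = false` to match the detected cgroupfs driver, and pointing `conf_dir` at /etc/cni/net.d. The same indentation-preserving substitutions, replayed on a scratch file (the starting TOML below is a minimal invented fragment, not the real kicbase config):

```shell
# Scratch replay of the config.toml rewrites from the log.
toml=$(mktemp)
cat > "$toml" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
  [plugins."io.containerd.grpc.v1.cri".cni]
    conf_dir = "/etc/cni/custom"
EOF

# The captured-group \1 keeps each line's original indentation intact.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$toml"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$toml"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$toml"
cat "$toml"
```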
I0210 12:33:01.232422 79643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0210 12:33:01.240229 79643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0210 12:33:01.247880 79643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0210 12:33:01.318524 79643 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0210 12:33:01.416117 79643 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0210 12:33:01.416194 79643 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0210 12:33:01.419682 79643 start.go:563] Will wait 60s for crictl version
I0210 12:33:01.419727 79643 ssh_runner.go:195] Run: which crictl
I0210 12:33:01.422719 79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0210 12:33:01.454910 79643 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.24
RuntimeApiVersion: v1
I0210 12:33:01.455010 79643 ssh_runner.go:195] Run: containerd --version
I0210 12:33:01.476888 79643 ssh_runner.go:195] Run: containerd --version
I0210 12:33:01.500523 79643 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.24 ...
I0210 12:33:01.501913 79643 cli_runner.go:164] Run: docker network inspect addons-444927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0210 12:33:01.518074 79643 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0210 12:33:01.521686 79643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
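The one-liner above refreshes the `host.minikube.internal` entry by filtering out any stale line and appending the current gateway IP. A scratch-file version (the pre-existing entry is invented; `tab=$(printf '\t')` replaces the bash-only `$'\t'` so the sketch stays POSIX-portable):

```shell
# Replay of the host.minikube.internal refresh on a scratch hosts file.
hosts=$(mktemp)
tab=$(printf '\t')
printf '127.0.0.1 localhost\n10.0.0.1\thost.minikube.internal\n' > "$hosts"

# Drop any existing tab-separated entry, append the gateway IP, then
# swap the file into place (the log does the swap with sudo cp).
{ grep -v "${tab}host.minikube.internal\$" "$hosts"; \
  echo "192.168.49.1 host.minikube.internal"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```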
I0210 12:33:01.531566 79643 kubeadm.go:883] updating cluster {Name:addons-444927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-444927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0210 12:33:01.531685 79643 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0210 12:33:01.531732 79643 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 12:33:01.562974 79643 containerd.go:627] all images are preloaded for containerd runtime.
I0210 12:33:01.563001 79643 containerd.go:534] Images already preloaded, skipping extraction
I0210 12:33:01.563047 79643 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 12:33:01.592555 79643 containerd.go:627] all images are preloaded for containerd runtime.
I0210 12:33:01.592579 79643 cache_images.go:84] Images are preloaded, skipping loading
I0210 12:33:01.592587 79643 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.1 containerd true true} ...
I0210 12:33:01.592682 79643 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-444927 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.32.1 ClusterName:addons-444927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0210 12:33:01.592735 79643 ssh_runner.go:195] Run: sudo crictl info
I0210 12:33:01.623452 79643 cni.go:84] Creating CNI manager for ""
I0210 12:33:01.623477 79643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0210 12:33:01.623486 79643 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0210 12:33:01.623507 79643 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-444927 NodeName:addons-444927 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0210 12:33:01.623615 79643 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "addons-444927"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.49.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0210 12:33:01.623671 79643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
I0210 12:33:01.631750 79643 binaries.go:44] Found k8s binaries, skipping transfer
I0210 12:33:01.631825 79643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0210 12:33:01.639867 79643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
I0210 12:33:01.655786 79643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0210 12:33:01.671898 79643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2303 bytes)
I0210 12:33:01.687899 79643 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0210 12:33:01.691139 79643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0210 12:33:01.700866 79643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0210 12:33:01.773557 79643 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0210 12:33:01.785744 79643 certs.go:68] Setting up /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927 for IP: 192.168.49.2
I0210 12:33:01.785779 79643 certs.go:194] generating shared ca certs ...
I0210 12:33:01.785794 79643 certs.go:226] acquiring lock for ca certs: {Name:mked3bdcf754b16a474f1226f12a3cc337a7b998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 12:33:01.785949 79643 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20390-71607/.minikube/ca.key
I0210 12:33:01.922615 79643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-71607/.minikube/ca.crt ...
I0210 12:33:01.922647 79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/ca.crt: {Name:mkef3eef409099ff0f7e44091834829fbad35c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 12:33:01.922817 79643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-71607/.minikube/ca.key ...
I0210 12:33:01.922828 79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/ca.key: {Name:mk510fc2adf34c3fc31ae26cb281e5b8ef5ec290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 12:33:01.922905 79643 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.key
I0210 12:33:02.094700 79643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.crt ...
I0210 12:33:02.094732 79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.crt: {Name:mk889e108ee6d8144896b8270af91bb2b556eda1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 12:33:02.094887 79643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.key ...
I0210 12:33:02.094898 79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.key: {Name:mk9c02ad352a509adb756091b4a5154f9677764d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 12:33:02.094966 79643 certs.go:256] generating profile certs ...
I0210 12:33:02.095031 79643 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.key
I0210 12:33:02.095047 79643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt with IP's: []
I0210 12:33:02.266358 79643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt ...
I0210 12:33:02.266388 79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: {Name:mk23970da64f703f5906c3bd636af5390226c140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 12:33:02.266544 79643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.key ...
I0210 12:33:02.266554 79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.key: {Name:mk6f35996bfc4d32b768f388cc84408562d576a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 12:33:02.266622 79643 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.key.f68b14cb
I0210 12:33:02.266640 79643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.crt.f68b14cb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0210 12:33:02.401191 79643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.crt.f68b14cb ...
I0210 12:33:02.401223 79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.crt.f68b14cb: {Name:mk7c0503d9797503b266466a39d8a570eeb5c34c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 12:33:02.401377 79643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.key.f68b14cb ...
I0210 12:33:02.401390 79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.key.f68b14cb: {Name:mkae929dc7ca816ce3b467ef209a3ef4562dfbff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 12:33:02.401458 79643 certs.go:381] copying /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.crt.f68b14cb -> /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.crt
I0210 12:33:02.401527 79643 certs.go:385] copying /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.key.f68b14cb -> /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.key
I0210 12:33:02.401570 79643 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.key
I0210 12:33:02.401588 79643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.crt with IP's: []
I0210 12:33:02.532153 79643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.crt ...
I0210 12:33:02.532189 79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.crt: {Name:mk9cb3b3502b3d4f0b30dd8eab54a1fb94cedbd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 12:33:02.532392 79643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.key ...
I0210 12:33:02.532414 79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.key: {Name:mk569b49c57735c308242a9566a3f99c6d61a13d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 12:33:02.532659 79643 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca-key.pem (1679 bytes)
I0210 12:33:02.532702 79643 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem (1082 bytes)
I0210 12:33:02.532731 79643 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/cert.pem (1123 bytes)
I0210 12:33:02.532766 79643 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/key.pem (1675 bytes)
I0210 12:33:02.533319 79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0210 12:33:02.556004 79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0210 12:33:02.578132 79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0210 12:33:02.600143 79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0210 12:33:02.621953 79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0210 12:33:02.643955 79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0210 12:33:02.666085 79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0210 12:33:02.687771 79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0210 12:33:02.709315 79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0210 12:33:02.730785 79643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0210 12:33:02.747042 79643 ssh_runner.go:195] Run: openssl version
I0210 12:33:02.752300 79643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0210 12:33:02.761012 79643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0210 12:33:02.764162 79643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:33 /usr/share/ca-certificates/minikubeCA.pem
I0210 12:33:02.764221 79643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0210 12:33:02.770444 79643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0210 12:33:02.778794 79643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0210 12:33:02.781832 79643 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0210 12:33:02.781884 79643 kubeadm.go:392] StartCluster: {Name:addons-444927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-444927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0210 12:33:02.781982 79643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0210 12:33:02.782045 79643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0210 12:33:02.813399 79643 cri.go:89] found id: ""
I0210 12:33:02.813469 79643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0210 12:33:02.821478 79643 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0210 12:33:02.829378 79643 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0210 12:33:02.829429 79643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0210 12:33:02.837541 79643 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0210 12:33:02.837565 79643 kubeadm.go:157] found existing configuration files:
I0210 12:33:02.837614 79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0210 12:33:02.845977 79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0210 12:33:02.846075 79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0210 12:33:02.853868 79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0210 12:33:02.861974 79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0210 12:33:02.862045 79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0210 12:33:02.869989 79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0210 12:33:02.878067 79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0210 12:33:02.878119 79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0210 12:33:02.885629 79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0210 12:33:02.893262 79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0210 12:33:02.893340 79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0210 12:33:02.900797 79643 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0210 12:33:02.936211 79643 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0210 12:33:02.936276 79643 kubeadm.go:310] [preflight] Running pre-flight checks
I0210 12:33:02.952069 79643 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0210 12:33:02.952230 79643 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-gcp
I0210 12:33:02.952305 79643 kubeadm.go:310] OS: Linux
I0210 12:33:02.952385 79643 kubeadm.go:310] CGROUPS_CPU: enabled
I0210 12:33:02.952461 79643 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0210 12:33:02.952541 79643 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0210 12:33:02.952607 79643 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0210 12:33:02.952681 79643 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0210 12:33:02.952751 79643 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0210 12:33:02.952812 79643 kubeadm.go:310] CGROUPS_PIDS: enabled
I0210 12:33:02.952882 79643 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0210 12:33:02.952962 79643 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0210 12:33:03.002868 79643 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0210 12:33:03.003011 79643 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0210 12:33:03.003158 79643 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0210 12:33:03.007775 79643 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0210 12:33:03.010521 79643 out.go:235] - Generating certificates and keys ...
I0210 12:33:03.010635 79643 kubeadm.go:310] [certs] Using existing ca certificate authority
I0210 12:33:03.010732 79643 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0210 12:33:03.261144 79643 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0210 12:33:03.539869 79643 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0210 12:33:03.694466 79643 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0210 12:33:03.760391 79643 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0210 12:33:03.947188 79643 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0210 12:33:03.947334 79643 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-444927 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0210 12:33:04.034279 79643 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0210 12:33:04.034393 79643 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-444927 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0210 12:33:04.120259 79643 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0210 12:33:04.250619 79643 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0210 12:33:04.377826 79643 kubeadm.go:310] [certs] Generating "sa" key and public key
I0210 12:33:04.377894 79643 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0210 12:33:04.449748 79643 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0210 12:33:04.830769 79643 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0210 12:33:04.953602 79643 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0210 12:33:05.249944 79643 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0210 12:33:05.415685 79643 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0210 12:33:05.416138 79643 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0210 12:33:05.418601 79643 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0210 12:33:05.420455 79643 out.go:235] - Booting up control plane ...
I0210 12:33:05.420574 79643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0210 12:33:05.420666 79643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0210 12:33:05.421715 79643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0210 12:33:05.434153 79643 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0210 12:33:05.439211 79643 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0210 12:33:05.439306 79643 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0210 12:33:05.523893 79643 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0210 12:33:05.524020 79643 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0210 12:33:06.525284 79643 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001493806s
I0210 12:33:06.525390 79643 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0210 12:33:11.026633 79643 kubeadm.go:310] [api-check] The API server is healthy after 4.501315709s
I0210 12:33:11.037920 79643 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0210 12:33:11.048457 79643 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0210 12:33:11.067472 79643 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0210 12:33:11.067810 79643 kubeadm.go:310] [mark-control-plane] Marking the node addons-444927 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0210 12:33:11.076130 79643 kubeadm.go:310] [bootstrap-token] Using token: 2ofei0.shg6irm5a7ti5w06
I0210 12:33:11.077645 79643 out.go:235] - Configuring RBAC rules ...
I0210 12:33:11.077843 79643 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0210 12:33:11.081018 79643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0210 12:33:11.087190 79643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0210 12:33:11.089603 79643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0210 12:33:11.092120 79643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0210 12:33:11.094569 79643 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0210 12:33:11.432371 79643 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0210 12:33:11.850545 79643 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0210 12:33:12.432775 79643 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0210 12:33:12.433534 79643 kubeadm.go:310]
I0210 12:33:12.433594 79643 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0210 12:33:12.433601 79643 kubeadm.go:310]
I0210 12:33:12.433661 79643 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0210 12:33:12.433668 79643 kubeadm.go:310]
I0210 12:33:12.433687 79643 kubeadm.go:310] mkdir -p $HOME/.kube
I0210 12:33:12.433740 79643 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0210 12:33:12.433782 79643 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0210 12:33:12.433789 79643 kubeadm.go:310]
I0210 12:33:12.433834 79643 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0210 12:33:12.433840 79643 kubeadm.go:310]
I0210 12:33:12.433876 79643 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0210 12:33:12.433883 79643 kubeadm.go:310]
I0210 12:33:12.433924 79643 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0210 12:33:12.433989 79643 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0210 12:33:12.434047 79643 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0210 12:33:12.434053 79643 kubeadm.go:310]
I0210 12:33:12.434118 79643 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0210 12:33:12.434183 79643 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0210 12:33:12.434191 79643 kubeadm.go:310]
I0210 12:33:12.434319 79643 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2ofei0.shg6irm5a7ti5w06 \
I0210 12:33:12.434482 79643 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:a04e7adba77e55f6c403d6b6702c62e468700cf463ec68bf30f3cb8b7b5deb33 \
I0210 12:33:12.434510 79643 kubeadm.go:310] --control-plane
I0210 12:33:12.434515 79643 kubeadm.go:310]
I0210 12:33:12.434591 79643 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0210 12:33:12.434606 79643 kubeadm.go:310]
I0210 12:33:12.434668 79643 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2ofei0.shg6irm5a7ti5w06 \
I0210 12:33:12.434815 79643 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:a04e7adba77e55f6c403d6b6702c62e468700cf463ec68bf30f3cb8b7b5deb33
I0210 12:33:12.437045 79643 kubeadm.go:310] [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
I0210 12:33:12.437242 79643 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-gcp\n", err: exit status 1
I0210 12:33:12.437355 79643 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0210 12:33:12.437387 79643 cni.go:84] Creating CNI manager for ""
I0210 12:33:12.437397 79643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0210 12:33:12.439477 79643 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0210 12:33:12.441109 79643 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0210 12:33:12.444652 79643 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
I0210 12:33:12.444669 79643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
I0210 12:33:12.461126 79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0210 12:33:12.654936 79643 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0210 12:33:12.655043 79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0210 12:33:12.655064 79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-444927 minikube.k8s.io/updated_at=2025_02_10T12_33_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=7d7e9539cf1c3abd6114cdafa89e43b830da4e04 minikube.k8s.io/name=addons-444927 minikube.k8s.io/primary=true
I0210 12:33:12.662283 79643 ops.go:34] apiserver oom_adj: -16
I0210 12:33:12.737974 79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0210 12:33:13.238514 79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0210 12:33:13.738725 79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0210 12:33:14.238636 79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0210 12:33:14.738851 79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0210 12:33:15.238406 79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0210 12:33:15.738133 79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0210 12:33:16.238615 79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0210 12:33:16.738130 79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0210 12:33:16.799926 79643 kubeadm.go:1113] duration metric: took 4.144944938s to wait for elevateKubeSystemPrivileges
I0210 12:33:16.799969 79643 kubeadm.go:394] duration metric: took 14.018089958s to StartCluster
I0210 12:33:16.799994 79643 settings.go:142] acquiring lock: {Name:mk48700407fa7ae208a78ae38cd1ed6c94166a30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 12:33:16.800148 79643 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20390-71607/kubeconfig
I0210 12:33:16.800846 79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/kubeconfig: {Name:mk5db87da690cfc2ed8765dd4558179e05f09057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 12:33:16.801037 79643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0210 12:33:16.801046 79643 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0210 12:33:16.801105 79643 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0210 12:33:16.801276 79643 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-444927"
I0210 12:33:16.801301 79643 addons.go:69] Setting yakd=true in profile "addons-444927"
I0210 12:33:16.801323 79643 config.go:182] Loaded profile config "addons-444927": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 12:33:16.801337 79643 addons.go:69] Setting registry=true in profile "addons-444927"
I0210 12:33:16.801721 79643 addons.go:238] Setting addon registry=true in "addons-444927"
I0210 12:33:16.801283 79643 addons.go:69] Setting cloud-spanner=true in profile "addons-444927"
I0210 12:33:16.801745 79643 addons.go:69] Setting gcp-auth=true in profile "addons-444927"
I0210 12:33:16.801774 79643 mustload.go:65] Loading cluster: addons-444927
I0210 12:33:16.801783 79643 addons.go:238] Setting addon cloud-spanner=true in "addons-444927"
I0210 12:33:16.801806 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.801833 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.801295 79643 addons.go:69] Setting default-storageclass=true in profile "addons-444927"
I0210 12:33:16.801896 79643 addons.go:69] Setting metrics-server=true in profile "addons-444927"
I0210 12:33:16.801324 79643 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-444927"
I0210 12:33:16.801692 79643 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-444927"
I0210 12:33:16.801983 79643 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-444927"
I0210 12:33:16.802032 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.802056 79643 addons.go:69] Setting volumesnapshots=true in profile "addons-444927"
I0210 12:33:16.802074 79643 addons.go:238] Setting addon volumesnapshots=true in "addons-444927"
I0210 12:33:16.802087 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.802095 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.802578 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.802707 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.802724 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.801365 79643 addons.go:69] Setting inspektor-gadget=true in profile "addons-444927"
I0210 12:33:16.802865 79643 addons.go:238] Setting addon inspektor-gadget=true in "addons-444927"
I0210 12:33:16.802893 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.803008 79643 addons.go:238] Setting addon metrics-server=true in "addons-444927"
I0210 12:33:16.803074 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.803473 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.803654 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.802083 79643 config.go:182] Loaded profile config "addons-444927": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 12:33:16.804831 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.802723 79643 addons.go:69] Setting ingress=true in profile "addons-444927"
I0210 12:33:16.805053 79643 addons.go:238] Setting addon ingress=true in "addons-444927"
I0210 12:33:16.805115 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.801707 79643 addons.go:69] Setting ingress-dns=true in profile "addons-444927"
I0210 12:33:16.805183 79643 addons.go:238] Setting addon ingress-dns=true in "addons-444927"
I0210 12:33:16.802710 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.805211 79643 out.go:177] * Verifying Kubernetes components...
I0210 12:33:16.801928 79643 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-444927"
I0210 12:33:16.804079 79643 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-444927"
I0210 12:33:16.806128 79643 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-444927"
I0210 12:33:16.806168 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.804534 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.806798 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.806854 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.807110 79643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0210 12:33:16.807166 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.805226 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.801961 79643 addons.go:69] Setting volcano=true in profile "addons-444927"
I0210 12:33:16.807623 79643 addons.go:238] Setting addon volcano=true in "addons-444927"
I0210 12:33:16.807672 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.801327 79643 addons.go:238] Setting addon yakd=true in "addons-444927"
I0210 12:33:16.808231 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.803843 79643 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-444927"
I0210 12:33:16.810228 79643 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-444927"
I0210 12:33:16.810623 79643 addons.go:69] Setting storage-provisioner=true in profile "addons-444927"
I0210 12:33:16.810680 79643 addons.go:238] Setting addon storage-provisioner=true in "addons-444927"
I0210 12:33:16.810721 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.838134 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.838559 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.839035 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.843204 79643 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0210 12:33:16.844601 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.844640 79643 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0210 12:33:16.844654 79643 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0210 12:33:16.844705 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:16.846404 79643 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
I0210 12:33:16.848437 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.849021 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.851657 79643 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0210 12:33:16.853671 79643 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0210 12:33:16.854815 79643 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0210 12:33:16.854923 79643 out.go:177] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I0210 12:33:16.856200 79643 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I0210 12:33:16.856220 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I0210 12:33:16.856277 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:16.856427 79643 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0210 12:33:16.861474 79643 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0210 12:33:16.861900 79643 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0210 12:33:16.861935 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0210 12:33:16.861998 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:16.864666 79643 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0210 12:33:16.866964 79643 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0210 12:33:16.868311 79643 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0210 12:33:16.870331 79643 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0210 12:33:16.871537 79643 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0210 12:33:16.872747 79643 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0210 12:33:16.872780 79643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0210 12:33:16.872781 79643 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
I0210 12:33:16.872941 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:16.873000 79643 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
I0210 12:33:16.874318 79643 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
I0210 12:33:16.874343 79643 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
I0210 12:33:16.874433 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:16.874435 79643 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
I0210 12:33:16.874452 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0210 12:33:16.874502 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:16.882101 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:16.885760 79643 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-444927"
I0210 12:33:16.885811 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.886297 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.898880 79643 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0210 12:33:16.898880 79643 out.go:177] - Using image docker.io/registry:2.8.3
I0210 12:33:16.900066 79643 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
I0210 12:33:16.900088 79643 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0210 12:33:16.900152 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:16.901532 79643 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
I0210 12:33:16.901533 79643 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
I0210 12:33:16.902819 79643 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0210 12:33:16.902843 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0210 12:33:16.902898 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:16.903130 79643 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
I0210 12:33:16.903146 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0210 12:33:16.903189 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:16.908852 79643 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0210 12:33:16.910103 79643 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0210 12:33:16.910127 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0210 12:33:16.910188 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:16.913402 79643 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0210 12:33:16.914782 79643 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0210 12:33:16.914802 79643 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0210 12:33:16.914859 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:16.916205 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:16.919705 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:16.931685 79643 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0210 12:33:16.933094 79643 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0210 12:33:16.933122 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0210 12:33:16.933185 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:16.938961 79643 out.go:177] - Using image docker.io/volcanosh/vc-webhook-manager:v1.11.0
I0210 12:33:16.939483 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:16.943779 79643 out.go:177] - Using image docker.io/volcanosh/vc-controller-manager:v1.11.0
I0210 12:33:16.944883 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:16.946437 79643 out.go:177] - Using image docker.io/volcanosh/vc-scheduler:v1.11.0
I0210 12:33:16.949094 79643 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
I0210 12:33:16.949122 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (480278 bytes)
I0210 12:33:16.949184 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:16.949968 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:16.951958 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:16.955010 79643 addons.go:238] Setting addon default-storageclass=true in "addons-444927"
I0210 12:33:16.955054 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:16.955245 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:16.955501 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:16.964944 79643 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0210 12:33:16.965016 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:16.965559 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:16.967762 79643 out.go:177] - Using image docker.io/busybox:stable
I0210 12:33:16.968721 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:16.969151 79643 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0210 12:33:16.969168 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0210 12:33:16.969216 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:16.970981 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:16.978773 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:16.980092 79643 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0210 12:33:16.980112 79643 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0210 12:33:16.980161 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:16.990566 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:17.024017 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
W0210 12:33:17.086366 79643 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
I0210 12:33:17.086404 79643 retry.go:31] will retry after 310.475021ms: ssh: handshake failed: EOF
I0210 12:33:17.212008 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0210 12:33:17.219415 79643 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0210 12:33:17.219536 79643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0210 12:33:17.220033 79643 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0210 12:33:17.220054 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0210 12:33:17.220482 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
I0210 12:33:17.395012 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0210 12:33:17.403495 79643 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0210 12:33:17.403528 79643 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0210 12:33:17.486191 79643 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
I0210 12:33:17.486223 79643 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0210 12:33:17.486380 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0210 12:33:17.486484 79643 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0210 12:33:17.486501 79643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0210 12:33:17.487139 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0210 12:33:17.497956 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0210 12:33:17.503609 79643 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
I0210 12:33:17.503700 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
I0210 12:33:17.607746 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I0210 12:33:17.694547 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0210 12:33:17.701702 79643 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0210 12:33:17.701736 79643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0210 12:33:17.789601 79643 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0210 12:33:17.789702 79643 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0210 12:33:17.790762 79643 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0210 12:33:17.790790 79643 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0210 12:33:17.804790 79643 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
I0210 12:33:17.804893 79643 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0210 12:33:17.805936 79643 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
I0210 12:33:17.806057 79643 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0210 12:33:17.893326 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I0210 12:33:17.895640 79643 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0210 12:33:17.895720 79643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0210 12:33:18.098746 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0210 12:33:18.200130 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0210 12:33:18.287135 79643 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
I0210 12:33:18.287163 79643 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0210 12:33:18.297557 79643 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0210 12:33:18.297648 79643 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0210 12:33:18.386947 79643 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
I0210 12:33:18.387033 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0210 12:33:18.398777 79643 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0210 12:33:18.398872 79643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0210 12:33:18.786467 79643 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0210 12:33:18.786741 79643 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0210 12:33:18.786700 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0210 12:33:18.799082 79643 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0210 12:33:18.799169 79643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0210 12:33:18.800278 79643 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
I0210 12:33:18.800343 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0210 12:33:19.300183 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.088125694s)
I0210 12:33:19.300304 79643 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.080807397s)
I0210 12:33:19.301465 79643 node_ready.go:35] waiting up to 6m0s for node "addons-444927" to be "Ready" ...
I0210 12:33:19.399315 79643 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0210 12:33:19.399406 79643 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0210 12:33:19.405130 79643 node_ready.go:49] node "addons-444927" has status "Ready":"True"
I0210 12:33:19.405157 79643 node_ready.go:38] duration metric: took 103.619571ms for node "addons-444927" to be "Ready" ...
I0210 12:33:19.405170 79643 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0210 12:33:19.501306 79643 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace to be "Ready" ...
I0210 12:33:19.588953 79643 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.369376678s)
I0210 12:33:19.589045 79643 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
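The replace pipeline logged above splices a hosts block into the CoreDNS Corefile ahead of its forward directive, and a log directive ahead of errors. Reconstructed from the sed expressions in the command (indentation and the surrounding directives are approximate, not taken from the actual ConfigMap), the injected fragment looks roughly like:

```
.:53 {
    log
    errors
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    ...
    forward . /etc/resolv.conf
    ...
}
```

The hosts plugin answers host.minikube.internal locally and falls through to the remaining plugins for everything else, which is how pods in the cluster resolve the host machine's gateway address.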
I0210 12:33:19.704489 79643 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0210 12:33:19.704581 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0210 12:33:19.908163 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0210 12:33:19.996801 79643 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0210 12:33:19.996832 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0210 12:33:20.092965 79643 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-444927" context rescaled to 1 replicas
I0210 12:33:20.385014 79643 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0210 12:33:20.385116 79643 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0210 12:33:20.399436 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0210 12:33:20.597102 79643 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0210 12:33:20.597130 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0210 12:33:21.000145 79643 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0210 12:33:21.000175 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0210 12:33:21.290846 79643 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0210 12:33:21.290879 79643 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0210 12:33:21.605576 79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:21.696402 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0210 12:33:23.890657 79643 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0210 12:33:23.890734 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:23.918163 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:24.007369 79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:24.310882 79643 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0210 12:33:24.410800 79643 addons.go:238] Setting addon gcp-auth=true in "addons-444927"
I0210 12:33:24.410897 79643 host.go:66] Checking if "addons-444927" exists ...
I0210 12:33:24.411360 79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
I0210 12:33:24.429471 79643 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0210 12:33:24.429515 79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
I0210 12:33:24.445428 79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
I0210 12:33:26.014149 79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:27.013531 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.793016074s)
I0210 12:33:27.013704 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.618648268s)
I0210 12:33:27.013738 79643 addons.go:479] Verifying addon ingress=true in "addons-444927"
I0210 12:33:27.013748 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.526559257s)
I0210 12:33:27.013816 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (9.406045546s)
I0210 12:33:27.013887 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.319264024s)
I0210 12:33:27.013962 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (9.120554129s)
I0210 12:33:27.013788 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.527377786s)
I0210 12:33:27.014019 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.915185644s)
I0210 12:33:27.013797 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.515813648s)
I0210 12:33:27.014206 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.813996526s)
I0210 12:33:27.014225 79643 addons.go:479] Verifying addon metrics-server=true in "addons-444927"
I0210 12:33:27.014232 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.227346765s)
I0210 12:33:27.014250 79643 addons.go:479] Verifying addon registry=true in "addons-444927"
I0210 12:33:27.014273 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.106076426s)
I0210 12:33:27.014393 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.614928343s)
W0210 12:33:27.014422 79643 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0210 12:33:27.014445 79643 retry.go:31] will retry after 125.415761ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0210 12:33:27.015282 79643 out.go:177] * Verifying ingress addon...
I0210 12:33:27.015995 79643 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-444927 service yakd-dashboard -n yakd-dashboard
I0210 12:33:27.015999 79643 out.go:177] * Verifying registry addon...
I0210 12:33:27.017498 79643 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0210 12:33:27.018367 79643 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0210 12:33:27.088833 79643 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0210 12:33:27.088859 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:27.089435 79643 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0210 12:33:27.089456 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
W0210 12:33:27.094441 79643 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I0210 12:33:27.140441 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0210 12:33:27.590601 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:27.590825 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:27.693815 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.997275868s)
I0210 12:33:27.693856 79643 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-444927"
I0210 12:33:27.694158 79643 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.264638761s)
I0210 12:33:27.695570 79643 out.go:177] * Verifying csi-hostpath-driver addon...
I0210 12:33:27.695570 79643 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I0210 12:33:27.697882 79643 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0210 12:33:27.699542 79643 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0210 12:33:27.700672 79643 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0210 12:33:27.700723 79643 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0210 12:33:27.710105 79643 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0210 12:33:27.710129 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:27.803984 79643 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0210 12:33:27.804013 79643 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0210 12:33:27.902637 79643 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0210 12:33:27.902671 79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0210 12:33:27.999396 79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0210 12:33:28.088439 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:28.088848 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:28.201773 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:28.507991 79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:28.521203 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:28.595952 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:28.701219 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:29.086216 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:29.086471 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:29.201718 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:29.288121 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.147627082s)
I0210 12:33:29.288181 79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.288744229s)
I0210 12:33:29.289590 79643 addons.go:479] Verifying addon gcp-auth=true in "addons-444927"
I0210 12:33:29.292196 79643 out.go:177] * Verifying gcp-auth addon...
I0210 12:33:29.294748 79643 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0210 12:33:29.296932 79643 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0210 12:33:29.521089 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:29.521268 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:29.701446 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:30.021180 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:30.021327 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:30.201792 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:30.520708 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:30.520831 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:30.701768 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:31.006361 79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:31.021048 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:31.021143 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:31.201816 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:31.521436 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:31.521621 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:31.701460 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:32.021345 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:32.021556 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:32.202178 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:32.521440 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:32.521454 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:32.701544 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:33.020598 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:33.021124 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:33.200718 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:33.506074 79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:33.520186 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:33.521266 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:33.701201 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:34.020890 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:34.020924 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:34.200431 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:34.520747 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:34.520943 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:34.700856 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:35.020587 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:35.021288 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:35.201196 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:35.520793 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:35.521193 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:35.701233 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:36.006716 79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:36.020823 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:36.021083 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:36.201188 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:36.520428 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:36.521078 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:36.701095 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:37.020683 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:37.020850 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:37.201354 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:37.520816 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:37.520841 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:37.701611 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:38.020412 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:38.020924 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:38.200740 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:38.506568 79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:38.520881 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:38.521054 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:38.700836 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:39.020587 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:39.020678 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:39.201254 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:39.522118 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:39.522862 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:39.701373 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:40.021143 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:40.021176 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:40.200977 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:40.520795 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:40.520833 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:40.702040 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:41.005755 79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:41.020832 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:41.021187 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:41.201339 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:41.520423 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:41.521257 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:41.701615 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:42.020627 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:42.021014 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:42.201328 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:42.521340 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:42.521607 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:42.701589 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:43.007218 79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:43.021315 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:43.021536 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:43.201670 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:43.520647 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:43.521263 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:43.701443 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:44.021244 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:44.021264 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:44.205924 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:44.521094 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:44.521159 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:44.702041 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:45.020742 79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:45.021466 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:45.021593 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:45.201511 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:45.541041 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:45.541207 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:45.701881 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:46.020561 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:46.020830 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:46.201773 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:46.505883 79643 pod_ready.go:93] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"True"
I0210 12:33:46.505905 79643 pod_ready.go:82] duration metric: took 27.004506937s for pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace to be "Ready" ...
I0210 12:33:46.505915 79643 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-zrfk6" in "kube-system" namespace to be "Ready" ...
I0210 12:33:46.507390 79643 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-zrfk6" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-zrfk6" not found
I0210 12:33:46.507419 79643 pod_ready.go:82] duration metric: took 1.498927ms for pod "coredns-668d6bf9bc-zrfk6" in "kube-system" namespace to be "Ready" ...
E0210 12:33:46.507429 79643 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-zrfk6" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-zrfk6" not found
I0210 12:33:46.507437 79643 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-444927" in "kube-system" namespace to be "Ready" ...
I0210 12:33:46.510591 79643 pod_ready.go:93] pod "etcd-addons-444927" in "kube-system" namespace has status "Ready":"True"
I0210 12:33:46.510606 79643 pod_ready.go:82] duration metric: took 3.164494ms for pod "etcd-addons-444927" in "kube-system" namespace to be "Ready" ...
I0210 12:33:46.510617 79643 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-444927" in "kube-system" namespace to be "Ready" ...
I0210 12:33:46.513595 79643 pod_ready.go:93] pod "kube-apiserver-addons-444927" in "kube-system" namespace has status "Ready":"True"
I0210 12:33:46.513611 79643 pod_ready.go:82] duration metric: took 2.987785ms for pod "kube-apiserver-addons-444927" in "kube-system" namespace to be "Ready" ...
I0210 12:33:46.513620 79643 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-444927" in "kube-system" namespace to be "Ready" ...
I0210 12:33:46.516906 79643 pod_ready.go:93] pod "kube-controller-manager-addons-444927" in "kube-system" namespace has status "Ready":"True"
I0210 12:33:46.516923 79643 pod_ready.go:82] duration metric: took 3.29807ms for pod "kube-controller-manager-addons-444927" in "kube-system" namespace to be "Ready" ...
I0210 12:33:46.516932 79643 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bhdzg" in "kube-system" namespace to be "Ready" ...
I0210 12:33:46.519735 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:46.520516 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:46.701580 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:46.703427 79643 pod_ready.go:93] pod "kube-proxy-bhdzg" in "kube-system" namespace has status "Ready":"True"
I0210 12:33:46.703449 79643 pod_ready.go:82] duration metric: took 186.511762ms for pod "kube-proxy-bhdzg" in "kube-system" namespace to be "Ready" ...
I0210 12:33:46.703460 79643 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-444927" in "kube-system" namespace to be "Ready" ...
I0210 12:33:47.021297 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:47.021408 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:47.104919 79643 pod_ready.go:93] pod "kube-scheduler-addons-444927" in "kube-system" namespace has status "Ready":"True"
I0210 12:33:47.104949 79643 pod_ready.go:82] duration metric: took 401.480944ms for pod "kube-scheduler-addons-444927" in "kube-system" namespace to be "Ready" ...
I0210 12:33:47.104965 79643 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace to be "Ready" ...
I0210 12:33:47.202288 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:47.520614 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:47.521096 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:47.701157 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:48.020992 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:48.021255 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:48.201491 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:48.521420 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:48.521462 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:48.701557 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:49.021749 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:49.021899 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:49.109785 79643 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:49.201616 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:49.521166 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:49.521287 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:49.701550 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:50.122437 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:50.122692 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:50.201383 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:50.521274 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:50.521307 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:50.701526 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:51.020439 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:51.020841 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:51.110364 79643 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:51.201443 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:51.520614 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:51.521154 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:51.701412 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:52.021074 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:52.021074 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:52.202311 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:52.521057 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:52.521078 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:52.701225 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:53.020076 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:53.020897 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:53.201012 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:53.520869 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:53.520945 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:53.609993 79643 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:53.700838 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:54.021143 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:54.021173 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:54.200911 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:54.520493 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:54.521180 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:54.701640 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:55.020829 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:55.020948 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:55.200814 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:55.520544 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:55.521380 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:55.610124 79643 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:55.701152 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:56.021953 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:56.022003 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:56.201857 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:56.520546 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:56.521042 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:56.701936 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:57.020505 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:57.021206 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:57.201800 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:57.521507 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:57.521792 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:57.701257 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:58.020752 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:58.020939 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:58.110670 79643 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace has status "Ready":"False"
I0210 12:33:58.201616 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:58.521297 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:58.521400 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:58.701120 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:59.020447 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:59.021033 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:59.201758 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:33:59.521566 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:33:59.521621 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:33:59.701504 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:00.021368 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:00.021432 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:00.201085 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:00.520496 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:00.520875 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:00.610100 79643 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace has status "Ready":"False"
I0210 12:34:00.701109 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:01.021536 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:01.021588 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:01.222775 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:01.521199 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:01.521268 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:01.609438 79643 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace has status "Ready":"True"
I0210 12:34:01.609458 79643 pod_ready.go:82] duration metric: took 14.504486382s for pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace to be "Ready" ...
I0210 12:34:01.609466 79643 pod_ready.go:39] duration metric: took 42.204282274s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0210 12:34:01.609490 79643 api_server.go:52] waiting for apiserver process to appear ...
I0210 12:34:01.609547 79643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0210 12:34:01.624528 79643 api_server.go:72] duration metric: took 44.823449986s to wait for apiserver process to appear ...
I0210 12:34:01.624561 79643 api_server.go:88] waiting for apiserver healthz status ...
I0210 12:34:01.624589 79643 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0210 12:34:01.630346 79643 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0210 12:34:01.631435 79643 api_server.go:141] control plane version: v1.32.1
I0210 12:34:01.631463 79643 api_server.go:131] duration metric: took 6.893671ms to wait for apiserver health ...
I0210 12:34:01.631472 79643 system_pods.go:43] waiting for kube-system pods to appear ...
I0210 12:34:01.635155 79643 system_pods.go:59] 19 kube-system pods found
I0210 12:34:01.635197 79643 system_pods.go:61] "amd-gpu-device-plugin-tffg2" [cbfc6cf0-103c-44d4-85d7-bb02305be0fb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I0210 12:34:01.635206 79643 system_pods.go:61] "coredns-668d6bf9bc-pmclr" [fdb0ba16-77a2-4571-9a91-517bcfa86336] Running
I0210 12:34:01.635214 79643 system_pods.go:61] "csi-hostpath-attacher-0" [c96bbb2d-25b5-49cb-ac3d-0dfa740a57dc] Running
I0210 12:34:01.635222 79643 system_pods.go:61] "csi-hostpath-resizer-0" [392cfd8f-12f0-46ab-b74d-47d2d30396c4] Running
I0210 12:34:01.635230 79643 system_pods.go:61] "csi-hostpathplugin-8sfhb" [4efb0c0d-48cf-4a8c-bd48-7509139a7c09] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0210 12:34:01.635242 79643 system_pods.go:61] "etcd-addons-444927" [7e441f46-31ed-4b8e-83ef-470541260b8b] Running
I0210 12:34:01.635248 79643 system_pods.go:61] "kindnet-b2qzd" [3d4a13b3-ab3e-4626-806a-b5ed71164ce3] Running
I0210 12:34:01.635253 79643 system_pods.go:61] "kube-apiserver-addons-444927" [c4a04cf9-cdc3-4ae7-b94e-def5f025c9a0] Running
I0210 12:34:01.635261 79643 system_pods.go:61] "kube-controller-manager-addons-444927" [137bde99-d8e3-4d8f-803a-fa7d22ca2569] Running
I0210 12:34:01.635267 79643 system_pods.go:61] "kube-ingress-dns-minikube" [3a711174-7f5b-48d3-81d5-d11c8305f7e8] Running
I0210 12:34:01.635275 79643 system_pods.go:61] "kube-proxy-bhdzg" [cd096b8d-0142-4c4a-bb11-eda48b1ef5d7] Running
I0210 12:34:01.635282 79643 system_pods.go:61] "kube-scheduler-addons-444927" [722f34f2-250a-4f48-8479-774786f34499] Running
I0210 12:34:01.635290 79643 system_pods.go:61] "metrics-server-7fbb699795-9rzwp" [18f2c184-138a-4ae6-9b10-1f55f0ffe77d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0210 12:34:01.635299 79643 system_pods.go:61] "nvidia-device-plugin-daemonset-h5pb4" [3346d1a2-d520-442b-8349-6a8ecaea1a6f] Running
I0210 12:34:01.635305 79643 system_pods.go:61] "registry-6c88467877-gh4sc" [42010757-f6a0-42bd-af45-d200619f078b] Running
I0210 12:34:01.635316 79643 system_pods.go:61] "registry-proxy-lkxgg" [48863c7e-8f22-4c47-a211-3f269092501f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0210 12:34:01.635322 79643 system_pods.go:61] "snapshot-controller-68b874b76f-9d48t" [3a3cb3ea-bfa1-43c4-b04a-298b979dab6e] Running
I0210 12:34:01.635331 79643 system_pods.go:61] "snapshot-controller-68b874b76f-l9jq9" [8e900a1f-21ea-4c89-b7b7-ae34ba60446d] Running
I0210 12:34:01.635336 79643 system_pods.go:61] "storage-provisioner" [09ac7bfd-a4d4-4e2d-a1fc-1099e247efad] Running
I0210 12:34:01.635347 79643 system_pods.go:74] duration metric: took 3.866998ms to wait for pod list to return data ...
I0210 12:34:01.635357 79643 default_sa.go:34] waiting for default service account to be created ...
I0210 12:34:01.637756 79643 default_sa.go:45] found service account: "default"
I0210 12:34:01.637777 79643 default_sa.go:55] duration metric: took 2.411649ms for default service account to be created ...
I0210 12:34:01.637786 79643 system_pods.go:116] waiting for k8s-apps to be running ...
I0210 12:34:01.640908 79643 system_pods.go:86] 19 kube-system pods found
I0210 12:34:01.640950 79643 system_pods.go:89] "amd-gpu-device-plugin-tffg2" [cbfc6cf0-103c-44d4-85d7-bb02305be0fb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I0210 12:34:01.640962 79643 system_pods.go:89] "coredns-668d6bf9bc-pmclr" [fdb0ba16-77a2-4571-9a91-517bcfa86336] Running
I0210 12:34:01.640972 79643 system_pods.go:89] "csi-hostpath-attacher-0" [c96bbb2d-25b5-49cb-ac3d-0dfa740a57dc] Running
I0210 12:34:01.640978 79643 system_pods.go:89] "csi-hostpath-resizer-0" [392cfd8f-12f0-46ab-b74d-47d2d30396c4] Running
I0210 12:34:01.640993 79643 system_pods.go:89] "csi-hostpathplugin-8sfhb" [4efb0c0d-48cf-4a8c-bd48-7509139a7c09] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0210 12:34:01.641003 79643 system_pods.go:89] "etcd-addons-444927" [7e441f46-31ed-4b8e-83ef-470541260b8b] Running
I0210 12:34:01.641014 79643 system_pods.go:89] "kindnet-b2qzd" [3d4a13b3-ab3e-4626-806a-b5ed71164ce3] Running
I0210 12:34:01.641024 79643 system_pods.go:89] "kube-apiserver-addons-444927" [c4a04cf9-cdc3-4ae7-b94e-def5f025c9a0] Running
I0210 12:34:01.641034 79643 system_pods.go:89] "kube-controller-manager-addons-444927" [137bde99-d8e3-4d8f-803a-fa7d22ca2569] Running
I0210 12:34:01.641046 79643 system_pods.go:89] "kube-ingress-dns-minikube" [3a711174-7f5b-48d3-81d5-d11c8305f7e8] Running
I0210 12:34:01.641055 79643 system_pods.go:89] "kube-proxy-bhdzg" [cd096b8d-0142-4c4a-bb11-eda48b1ef5d7] Running
I0210 12:34:01.641061 79643 system_pods.go:89] "kube-scheduler-addons-444927" [722f34f2-250a-4f48-8479-774786f34499] Running
I0210 12:34:01.641070 79643 system_pods.go:89] "metrics-server-7fbb699795-9rzwp" [18f2c184-138a-4ae6-9b10-1f55f0ffe77d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0210 12:34:01.641080 79643 system_pods.go:89] "nvidia-device-plugin-daemonset-h5pb4" [3346d1a2-d520-442b-8349-6a8ecaea1a6f] Running
I0210 12:34:01.641087 79643 system_pods.go:89] "registry-6c88467877-gh4sc" [42010757-f6a0-42bd-af45-d200619f078b] Running
I0210 12:34:01.641098 79643 system_pods.go:89] "registry-proxy-lkxgg" [48863c7e-8f22-4c47-a211-3f269092501f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0210 12:34:01.641107 79643 system_pods.go:89] "snapshot-controller-68b874b76f-9d48t" [3a3cb3ea-bfa1-43c4-b04a-298b979dab6e] Running
I0210 12:34:01.641116 79643 system_pods.go:89] "snapshot-controller-68b874b76f-l9jq9" [8e900a1f-21ea-4c89-b7b7-ae34ba60446d] Running
I0210 12:34:01.641126 79643 system_pods.go:89] "storage-provisioner" [09ac7bfd-a4d4-4e2d-a1fc-1099e247efad] Running
I0210 12:34:01.641140 79643 system_pods.go:126] duration metric: took 3.346346ms to wait for k8s-apps to be running ...
I0210 12:34:01.641153 79643 system_svc.go:44] waiting for kubelet service to be running ....
I0210 12:34:01.641211 79643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0210 12:34:01.655042 79643 system_svc.go:56] duration metric: took 13.87799ms WaitForService to wait for kubelet
I0210 12:34:01.655074 79643 kubeadm.go:582] duration metric: took 44.854004154s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0210 12:34:01.655105 79643 node_conditions.go:102] verifying NodePressure condition ...
I0210 12:34:01.657661 79643 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0210 12:34:01.657704 79643 node_conditions.go:123] node cpu capacity is 8
I0210 12:34:01.657721 79643 node_conditions.go:105] duration metric: took 2.610448ms to run NodePressure ...
I0210 12:34:01.657738 79643 start.go:241] waiting for startup goroutines ...
I0210 12:34:01.735138 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:02.031635 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:02.031727 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:02.232204 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:02.521733 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:02.521750 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:02.701436 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:03.021000 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:03.021316 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:03.221051 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:03.520832 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:03.521330 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:03.701048 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:04.020854 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:04.021027 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:04.201724 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:04.521464 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:04.521589 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:04.701770 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:05.021539 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:05.021601 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:05.201546 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:05.520681 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:05.520800 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:05.702086 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:06.020829 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:06.021247 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:06.201270 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:06.521463 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:06.521563 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:06.701406 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:07.020414 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:07.021068 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:07.201339 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:07.520632 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:07.521178 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:07.701209 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:08.021554 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:08.021637 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:08.201411 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:08.521343 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:08.521399 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:08.701848 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:09.021283 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:09.021334 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:09.200955 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:09.520738 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:09.521147 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:09.701197 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:10.021409 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0210 12:34:10.021451 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:10.201998 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:10.521456 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:10.521721 79643 kapi.go:107] duration metric: took 43.503349444s to wait for kubernetes.io/minikube-addons=registry ...
I0210 12:34:10.702483 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:11.021247 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:11.201330 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:11.520737 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:11.701678 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:12.020696 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:12.201706 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:12.520827 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:12.700908 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:13.021038 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:13.200709 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:13.521854 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:13.701826 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:14.020645 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:14.201953 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:14.520854 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:14.701821 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:15.020508 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:15.201789 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:15.520791 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:15.701560 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:16.021369 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:16.201775 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:16.521226 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:16.701150 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:17.020725 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:17.201578 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:17.521254 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:17.700729 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:18.021429 79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0210 12:34:18.201576 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:18.521712 79643 kapi.go:107] duration metric: took 51.50420804s to wait for app.kubernetes.io/name=ingress-nginx ...
I0210 12:34:18.701598 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:19.201146 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:19.702016 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:20.204254 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:20.701640 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:21.201703 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:21.701095 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:22.200909 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0210 12:34:22.701644 79643 kapi.go:107] duration metric: took 55.00376107s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0210 12:34:52.297870 79643 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0210 12:34:52.297895 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0210 12:34:52.797993 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0210 12:34:53.297526 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0210 12:34:53.797559 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0210 12:34:54.297557 79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0210 12:34:54.798550 79643 kapi.go:107] duration metric: took 1m25.503798382s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0210 12:34:54.800386 79643 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-444927 cluster.
I0210 12:34:54.801893 79643 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0210 12:34:54.803268 79643 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0210 12:34:54.804681 79643 out.go:177] * Enabled addons: cloud-spanner, volcano, nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, inspektor-gadget, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
I0210 12:34:54.806202 79643 addons.go:514] duration metric: took 1m38.005101774s for enable addons: enabled=[cloud-spanner volcano nvidia-device-plugin amd-gpu-device-plugin storage-provisioner inspektor-gadget ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
I0210 12:34:54.806244 79643 start.go:246] waiting for cluster config update ...
I0210 12:34:54.806272 79643 start.go:255] writing updated cluster config ...
I0210 12:34:54.806535 79643 ssh_runner.go:195] Run: rm -f paused
I0210 12:34:54.857164 79643 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
I0210 12:34:54.858976 79643 out.go:177] * Done! kubectl is now configured to use "addons-444927" cluster and "default" namespace by default
==> container status <==
CONTAINER      IMAGE          CREATED        STATE    NAME                     ATTEMPT  POD ID         POD
d8ca48ba1c44f  56cc512116c8f  3 minutes ago  Running  busybox                  0        df487e63ae9d5  busybox
335a0ee58f465  ee44bc2368033  4 minutes ago  Running  controller               0        2f287e6d0edf2  ingress-nginx-controller-56d7c84fd4-zbfkg
0cec2cbf4135a  e16d1e3a10667  4 minutes ago  Running  local-path-provisioner   0        3d3c2ca75f484  local-path-provisioner-76f89f99b5-hllh6
a870218b99697  a62eeff05ba51  5 minutes ago  Exited   patch                    2        502eca8d57fe2  ingress-nginx-admission-patch-kwk2g
85e22ccf2fd29  a62eeff05ba51  5 minutes ago  Exited   create                   0        097560c943fa3  ingress-nginx-admission-create-zsvgr
8265f7e56e2bb  c69fa2e9cbf5f  5 minutes ago  Running  coredns                  0        7ebb694aa2770  coredns-668d6bf9bc-pmclr
13fb15fed8d27  30dd67412fdea  5 minutes ago  Running  minikube-ingress-dns     0        fe96d89ee65c6  kube-ingress-dns-minikube
bb21e3efe3fc2  d300845f67aeb  5 minutes ago  Running  kindnet-cni              0        c3cf0810b88ff  kindnet-b2qzd
7bef7a777b3e3  6e38f40d628db  5 minutes ago  Running  storage-provisioner      0        d165d6f754cc7  storage-provisioner
4e15a64a6e3a3  e29f9c7391fd9  5 minutes ago  Running  kube-proxy               0        b0f73a109986b  kube-proxy-bhdzg
6b5511caeb4b6  95c0bda56fc4d  5 minutes ago  Running  kube-apiserver           0        5edfcf3c6599b  kube-apiserver-addons-444927
3689951b3e8e3  a9e7e6b294baf  5 minutes ago  Running  etcd                     0        07e68c6d1f553  etcd-addons-444927
f0141e94893c8  2b0d6572d062c  5 minutes ago  Running  kube-scheduler           0        c83ce2f92899c  kube-scheduler-addons-444927
285b1d1cd9a34  019ee182b58e2  5 minutes ago  Running  kube-controller-manager  0        fc5b57769b2c7  kube-controller-manager-addons-444927
==> containerd <==
Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.279550754Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"204b3fdd994b9b12ae512dae5aa0cd650d6a6001000c83219cd4d35c4491f59d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.279621064Z" level=info msg="RemovePodSandbox \"204b3fdd994b9b12ae512dae5aa0cd650d6a6001000c83219cd4d35c4491f59d\" returns successfully"
Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.280134841Z" level=info msg="StopPodSandbox for \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\""
Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.287306406Z" level=info msg="TearDown network for sandbox \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\" successfully"
Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.287335074Z" level=info msg="StopPodSandbox for \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\" returns successfully"
Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.287818887Z" level=info msg="RemovePodSandbox for \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\""
Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.287851476Z" level=info msg="Forcibly stopping sandbox \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\""
Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.295181893Z" level=info msg="TearDown network for sandbox \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\" successfully"
Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.299442048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.299511244Z" level=info msg="RemovePodSandbox \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\" returns successfully"
Feb 10 12:37:25 addons-444927 containerd[860]: time="2025-02-10T12:37:25.694912719Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
Feb 10 12:37:25 addons-444927 containerd[860]: time="2025-02-10T12:37:25.696854236Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Feb 10 12:37:25 addons-444927 containerd[860]: time="2025-02-10T12:37:25.958308893Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Feb 10 12:37:26 addons-444927 containerd[860]: time="2025-02-10T12:37:26.573754007Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 10 12:37:26 addons-444927 containerd[860]: time="2025-02-10T12:37:26.573812308Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=11042"
Feb 10 12:37:38 addons-444927 containerd[860]: time="2025-02-10T12:37:38.694542017Z" level=info msg="PullImage \"busybox:stable\""
Feb 10 12:37:38 addons-444927 containerd[860]: time="2025-02-10T12:37:38.696619550Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Feb 10 12:37:39 addons-444927 containerd[860]: time="2025-02-10T12:37:39.070245317Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Feb 10 12:37:39 addons-444927 containerd[860]: time="2025-02-10T12:37:39.682588005Z" level=error msg="PullImage \"busybox:stable\" failed" error="failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 10 12:37:39 addons-444927 containerd[860]: time="2025-02-10T12:37:39.682645631Z" level=info msg="stop pulling image docker.io/library/busybox:stable: active requests=0, bytes read=11054"
Feb 10 12:38:51 addons-444927 containerd[860]: time="2025-02-10T12:38:51.694814666Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
Feb 10 12:38:51 addons-444927 containerd[860]: time="2025-02-10T12:38:51.696890381Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Feb 10 12:38:51 addons-444927 containerd[860]: time="2025-02-10T12:38:51.972534534Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Feb 10 12:38:52 addons-444927 containerd[860]: time="2025-02-10T12:38:52.754437670Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 10 12:38:52 addons-444927 containerd[860]: time="2025-02-10T12:38:52.754500375Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=21399"
==> coredns [8265f7e56e2bb889b5828efc038b36fa8cc3c87eb1f2499ab085aa4454899dcc] <==
[INFO] 10.244.0.16:46096 - 49638 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000144415s
[INFO] 10.244.0.16:53262 - 33490 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.00356957s
[INFO] 10.244.0.16:53262 - 33856 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.003971905s
[INFO] 10.244.0.16:42348 - 26371 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004565804s
[INFO] 10.244.0.16:42348 - 26084 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005381761s
[INFO] 10.244.0.16:48195 - 10688 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005672715s
[INFO] 10.244.0.16:48195 - 10353 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.007116525s
[INFO] 10.244.0.16:43289 - 1126 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110088s
[INFO] 10.244.0.16:43289 - 1426 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000169879s
[INFO] 10.244.0.26:57046 - 6317 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000184454s
[INFO] 10.244.0.26:44269 - 32005 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00027705s
[INFO] 10.244.0.26:41952 - 49104 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123825s
[INFO] 10.244.0.26:51853 - 47082 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00017984s
[INFO] 10.244.0.26:41441 - 20398 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126396s
[INFO] 10.244.0.26:57457 - 62700 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125438s
[INFO] 10.244.0.26:46468 - 22650 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.006806491s
[INFO] 10.244.0.26:59530 - 24465 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.008673079s
[INFO] 10.244.0.26:47549 - 30380 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006622696s
[INFO] 10.244.0.26:49073 - 7764 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007343908s
[INFO] 10.244.0.26:55471 - 59180 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005630425s
[INFO] 10.244.0.26:57911 - 54108 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006281675s
[INFO] 10.244.0.26:36166 - 30819 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000715616s
[INFO] 10.244.0.26:37151 - 17899 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000758884s
[INFO] 10.244.0.31:55875 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000290764s
[INFO] 10.244.0.31:34599 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00016092s
==> describe nodes <==
Name: addons-444927
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-444927
kubernetes.io/os=linux
minikube.k8s.io/commit=7d7e9539cf1c3abd6114cdafa89e43b830da4e04
minikube.k8s.io/name=addons-444927
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_02_10T12_33_12_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-444927
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 10 Feb 2025 12:33:09 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-444927
AcquireTime: <unset>
RenewTime: Mon, 10 Feb 2025 12:38:58 +0000
Conditions:
Type            Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
----            ------  -----------------                 ------------------                ------                      -------
MemoryPressure  False   Mon, 10 Feb 2025 12:36:15 +0000   Mon, 10 Feb 2025 12:33:07 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure    False   Mon, 10 Feb 2025 12:36:15 +0000   Mon, 10 Feb 2025 12:33:07 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure     False   Mon, 10 Feb 2025 12:36:15 +0000   Mon, 10 Feb 2025 12:33:07 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
Ready           True    Mon, 10 Feb 2025 12:36:15 +0000   Mon, 10 Feb 2025 12:33:09 +0000   KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: addons-444927
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859368Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859368Ki
pods: 110
System Info:
Machine ID: 84d0349545ea4184a13e466359bce586
System UUID: 790b434d-ab01-481d-9c8e-24468aad0754
Boot ID: 1d7cad77-75d7-418d-a590-e8096751a144
Kernel Version: 5.15.0-1075-gcp
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.24
Kubelet Version: v1.32.1
Kube-Proxy Version: v1.32.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace           Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------           ----                                       ------------  ----------  ---------------  -------------  ---
default             busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
default             nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
default             test-local-path                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
ingress-nginx       ingress-nginx-controller-56d7c84fd4-zbfkg  100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         5m39s
kube-system         coredns-668d6bf9bc-pmclr                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m47s
kube-system         etcd-addons-444927                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m52s
kube-system         kindnet-b2qzd                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m47s
kube-system         kube-apiserver-addons-444927               250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m52s
kube-system         kube-controller-manager-addons-444927      200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m52s
kube-system         kube-ingress-dns-minikube                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
kube-system         kube-proxy-bhdzg                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
kube-system         kube-scheduler-addons-444927               100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m52s
kube-system         storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
local-path-storage  local-path-provisioner-76f89f99b5-hllh6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests    Limits
--------           --------    ------
cpu                950m (11%)  100m (1%)
memory             310Mi (0%)  220Mi (0%)
ephemeral-storage  0 (0%)      0 (0%)
hugepages-1Gi      0 (0%)      0 (0%)
hugepages-2Mi      0 (0%)      0 (0%)
Events:
Type     Reason                   Age    From             Message
----     ------                   ----   ----             -------
Normal   Starting                 5m42s  kube-proxy
Normal   Starting                 5m52s  kubelet          Starting kubelet.
Warning  CgroupV1                 5m52s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal   NodeAllocatableEnforced  5m52s  kubelet          Updated Node Allocatable limit across pods
Normal   NodeHasSufficientMemory  5m52s  kubelet          Node addons-444927 status is now: NodeHasSufficientMemory
Normal   NodeHasNoDiskPressure    5m52s  kubelet          Node addons-444927 status is now: NodeHasNoDiskPressure
Normal   NodeHasSufficientPID     5m52s  kubelet          Node addons-444927 status is now: NodeHasSufficientPID
Normal   RegisteredNode           5m48s  node-controller  Node addons-444927 event: Registered Node addons-444927 in Controller
==> dmesg <==
[Feb10 09:17] #2
[ +0.001427] #3
[ +0.000000] #4
[ +0.003161] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[ +0.003164] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
[ +0.002021] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
[ +0.002123] #5
[ +0.000751] #6
[ +0.000811] #7
[ +0.060730] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +0.448106] i8042: Warning: Keylock active
[ +0.009792] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.004111] platform eisa.0: EISA: Cannot allocate resource for mainboard
[ +0.001792] platform eisa.0: Cannot allocate resource for EISA slot 1
[ +0.002113] platform eisa.0: Cannot allocate resource for EISA slot 2
[ +0.001740] platform eisa.0: Cannot allocate resource for EISA slot 3
[ +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 4
[ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 5
[ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 6
[ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
[ +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
[ +0.629359] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.026636] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +7.129242] kauditd_printk_skb: 46 callbacks suppressed
==> etcd [3689951b3e8e3c7756de3ba03de57b66bad31a4b4dc5540700134f77bc24fe01] <==
{"level":"info","ts":"2025-02-10T12:33:07.513806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2025-02-10T12:33:07.513821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2025-02-10T12:33:07.514744Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2025-02-10T12:33:07.515473Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-02-10T12:33:07.515473Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-444927 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2025-02-10T12:33:07.515499Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-02-10T12:33:07.515741Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-02-10T12:33:07.515777Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-02-10T12:33:07.515973Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2025-02-10T12:33:07.516048Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2025-02-10T12:33:07.516076Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2025-02-10T12:33:07.516352Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-02-10T12:33:07.516725Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-02-10T12:33:07.517444Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-02-10T12:33:07.517509Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2025-02-10T12:33:44.201892Z","caller":"traceutil/trace.go:171","msg":"trace[199568268] transaction","detail":"{read_only:false; response_revision:1025; number_of_response:1; }","duration":"127.850852ms","start":"2025-02-10T12:33:44.074019Z","end":"2025-02-10T12:33:44.201870Z","steps":["trace[199568268] 'process raft request' (duration: 126.917033ms)"],"step_count":1}
{"level":"warn","ts":"2025-02-10T12:33:50.120765Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.804057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2025-02-10T12:33:50.120851Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.923048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-02-10T12:33:50.120861Z","caller":"traceutil/trace.go:171","msg":"trace[10515872] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1088; }","duration":"100.953991ms","start":"2025-02-10T12:33:50.019893Z","end":"2025-02-10T12:33:50.120847Z","steps":["trace[10515872] 'range keys from in-memory index tree' (duration: 100.737113ms)"],"step_count":1}
{"level":"info","ts":"2025-02-10T12:33:50.120880Z","caller":"traceutil/trace.go:171","msg":"trace[1104687145] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1088; }","duration":"100.977882ms","start":"2025-02-10T12:33:50.019893Z","end":"2025-02-10T12:33:50.120871Z","steps":["trace[1104687145] 'range keys from in-memory index tree' (duration: 100.853237ms)"],"step_count":1}
{"level":"info","ts":"2025-02-10T12:35:23.415346Z","caller":"traceutil/trace.go:171","msg":"trace[748944253] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1540; }","duration":"204.198818ms","start":"2025-02-10T12:35:23.211128Z","end":"2025-02-10T12:35:23.415326Z","steps":["trace[748944253] 'process raft request' (duration: 173.949813ms)","trace[748944253] 'compare' (duration: 29.964626ms)"],"step_count":2}
{"level":"info","ts":"2025-02-10T12:35:23.415401Z","caller":"traceutil/trace.go:171","msg":"trace[1248621682] linearizableReadLoop","detail":"{readStateIndex:1590; appliedIndex:1589; }","duration":"203.708869ms","start":"2025-02-10T12:35:23.211683Z","end":"2025-02-10T12:35:23.415392Z","steps":["trace[1248621682] 'read index received' (duration: 173.402762ms)","trace[1248621682] 'applied index is now lower than readState.Index' (duration: 30.305504ms)"],"step_count":2}
{"level":"info","ts":"2025-02-10T12:35:23.415356Z","caller":"traceutil/trace.go:171","msg":"trace[1667749101] transaction","detail":"{read_only:false; response_revision:1541; number_of_response:1; }","duration":"202.520539ms","start":"2025-02-10T12:35:23.212817Z","end":"2025-02-10T12:35:23.415338Z","steps":["trace[1667749101] 'process raft request' (duration: 202.440173ms)"],"step_count":1}
{"level":"warn","ts":"2025-02-10T12:35:23.415702Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.993785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/batch.volcano.sh/jobs/my-volcano/test-job\" limit:1 ","response":"range_response_count:1 size:1709"}
{"level":"info","ts":"2025-02-10T12:35:23.415734Z","caller":"traceutil/trace.go:171","msg":"trace[727161755] range","detail":"{range_begin:/registry/batch.volcano.sh/jobs/my-volcano/test-job; range_end:; response_count:1; response_revision:1541; }","duration":"204.061202ms","start":"2025-02-10T12:35:23.211664Z","end":"2025-02-10T12:35:23.415726Z","steps":["trace[727161755] 'agreement among raft nodes before linearized reading' (duration: 203.933612ms)"],"step_count":1}
==> kernel <==
12:39:03 up 3:21, 0 users, load average: 0.19, 0.48, 0.27
Linux addons-444927 5.15.0-1075-gcp #84~20.04.1-Ubuntu SMP Thu Jan 16 20:44:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [bb21e3efe3fc23c6809548162b3e50334b811be602da16e741419cc39d3a6a5f] <==
I0210 12:36:57.688967 1 main.go:301] handling current node
I0210 12:37:07.688561 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I0210 12:37:07.688605 1 main.go:301] handling current node
I0210 12:37:17.685738 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I0210 12:37:17.685783 1 main.go:301] handling current node
I0210 12:37:27.685670 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I0210 12:37:27.685727 1 main.go:301] handling current node
I0210 12:37:37.689658 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I0210 12:37:37.689696 1 main.go:301] handling current node
I0210 12:37:47.687617 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I0210 12:37:47.687665 1 main.go:301] handling current node
I0210 12:37:57.685176 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I0210 12:37:57.685218 1 main.go:301] handling current node
I0210 12:38:07.688149 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I0210 12:38:07.688184 1 main.go:301] handling current node
I0210 12:38:17.692833 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I0210 12:38:17.692880 1 main.go:301] handling current node
I0210 12:38:27.685858 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I0210 12:38:27.685897 1 main.go:301] handling current node
I0210 12:38:37.685639 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I0210 12:38:37.685683 1 main.go:301] handling current node
I0210 12:38:47.694485 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I0210 12:38:47.694543 1 main.go:301] handling current node
I0210 12:38:57.693675 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I0210 12:38:57.693728 1 main.go:301] handling current node
==> kube-apiserver [6b5511caeb4b64a1e5025cdeeac686e0b5c81a0cbd9e5527f0b21e5f070a8cba] <==
W0210 12:35:24.617006 1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
W0210 12:35:24.703131 1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
W0210 12:35:24.791335 1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
W0210 12:35:25.091032 1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
W0210 12:35:25.390330 1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
E0210 12:35:40.784499 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37964: use of closed network connection
E0210 12:35:40.939778 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37988: use of closed network connection
I0210 12:35:50.508707 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.0.255"}
I0210 12:36:04.788645 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0210 12:36:04.964085 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.248.3"}
I0210 12:36:07.720554 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W0210 12:36:08.835857 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
I0210 12:36:16.253022 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I0210 12:36:35.251817 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0210 12:37:00.602649 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0210 12:37:00.602701 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0210 12:37:00.616758 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0210 12:37:00.616819 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0210 12:37:00.629321 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0210 12:37:00.629369 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0210 12:37:00.640167 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0210 12:37:00.640206 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0210 12:37:01.622485 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0210 12:37:01.640686 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W0210 12:37:01.791532 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
==> kube-controller-manager [285b1d1cd9a34f02e87f67f815e46e3710a9f2a4e94e679386dc52edfd107381] <==
E0210 12:38:41.295382 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0210 12:38:41.962468 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0210 12:38:41.963326 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="scheduling.volcano.sh/v1beta1, Resource=podgroups"
W0210 12:38:41.964156 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0210 12:38:41.964182 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0210 12:38:42.136263 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0210 12:38:42.137120 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="flow.volcano.sh/v1alpha1, Resource=jobtemplates"
W0210 12:38:42.137895 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0210 12:38:42.137924 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0210 12:38:47.850875 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0210 12:38:47.851724 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="nodeinfo.volcano.sh/v1alpha1, Resource=numatopologies"
W0210 12:38:47.852534 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0210 12:38:47.852565 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0210 12:38:53.980528 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0210 12:38:53.981440 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="batch.volcano.sh/v1alpha1, Resource=jobs"
W0210 12:38:53.982202 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0210 12:38:53.982233 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0210 12:38:55.172891 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0210 12:38:55.173747 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
W0210 12:38:55.174499 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0210 12:38:55.174529 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0210 12:38:55.347791 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0210 12:38:55.348672 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
W0210 12:38:55.349497 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0210 12:38:55.349527 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
==> kube-proxy [4e15a64a6e3a3cbb2b69641a157a914bffeef73fd8f8bda49180cdb370fad050] <==
I0210 12:33:19.691152 1 server_linux.go:66] "Using iptables proxy"
I0210 12:33:20.503292 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0210 12:33:20.503369 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0210 12:33:20.888874 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0210 12:33:20.888940 1 server_linux.go:170] "Using iptables Proxier"
I0210 12:33:20.892705 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0210 12:33:20.893247 1 server.go:497] "Version info" version="v1.32.1"
I0210 12:33:20.893262 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0210 12:33:20.895668 1 config.go:105] "Starting endpoint slice config controller"
I0210 12:33:20.895697 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0210 12:33:20.895793 1 config.go:199] "Starting service config controller"
I0210 12:33:20.895800 1 shared_informer.go:313] Waiting for caches to sync for service config
I0210 12:33:20.896238 1 config.go:329] "Starting node config controller"
I0210 12:33:20.896247 1 shared_informer.go:313] Waiting for caches to sync for node config
I0210 12:33:20.996013 1 shared_informer.go:320] Caches are synced for service config
I0210 12:33:20.996059 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0210 12:33:20.998260 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [f0141e94893c804b94acf56c906d0009941f8ca8333aa34efcfd459e91e885f0] <==
W0210 12:33:09.209806 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0210 12:33:09.210084 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0210 12:33:09.210099 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0210 12:33:09.209783 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0210 12:33:09.210102 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0210 12:33:09.210118 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0210 12:33:09.209896 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0210 12:33:09.210141 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0210 12:33:09.210008 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E0210 12:33:09.210161 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0210 12:33:10.055355 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0210 12:33:10.055412 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0210 12:33:10.058582 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0210 12:33:10.058618 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0210 12:33:10.153217 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0210 12:33:10.153255 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0210 12:33:10.199509 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0210 12:33:10.199545 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0210 12:33:10.236533 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0210 12:33:10.236581 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0210 12:33:10.236581 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0210 12:33:10.236598 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0210 12:33:10.347106 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0210 12:33:10.347146 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
I0210 12:33:12.207728 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Feb 10 12:37:23 addons-444927 kubelet[1601]: E0210 12:37:23.694275 1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
Feb 10 12:37:26 addons-444927 kubelet[1601]: E0210 12:37:26.574071 1601 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
Feb 10 12:37:26 addons-444927 kubelet[1601]: E0210 12:37:26.574146 1601 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
Feb 10 12:37:26 addons-444927 kubelet[1601]: E0210 12:37:26.574292 1601 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2nr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_default(7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Feb 10 12:37:26 addons-444927 kubelet[1601]: E0210 12:37:26.575498 1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0"
Feb 10 12:37:39 addons-444927 kubelet[1601]: E0210 12:37:39.682878 1601 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="busybox:stable"
Feb 10 12:37:39 addons-444927 kubelet[1601]: E0210 12:37:39.682952 1601 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="busybox:stable"
Feb 10 12:37:39 addons-444927 kubelet[1601]: E0210 12:37:39.683081 1601 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:busybox,Image:busybox:stable,Command:[sh -c echo 'local-path-provisioner' > /test/file1],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/test,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvtsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-local-path_default(99e0a41e-dea7-4fc3-a083-fa0680179d33): ErrImagePull: failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Feb 10 12:37:39 addons-444927 kubelet[1601]: E0210 12:37:39.684278 1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
Feb 10 12:37:40 addons-444927 kubelet[1601]: E0210 12:37:40.694782 1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0"
Feb 10 12:37:50 addons-444927 kubelet[1601]: E0210 12:37:50.694718 1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
Feb 10 12:37:53 addons-444927 kubelet[1601]: E0210 12:37:53.694391 1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0"
Feb 10 12:38:03 addons-444927 kubelet[1601]: E0210 12:38:03.694635 1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
Feb 10 12:38:05 addons-444927 kubelet[1601]: I0210 12:38:05.693890 1601 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Feb 10 12:38:07 addons-444927 kubelet[1601]: E0210 12:38:07.694145 1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0"
Feb 10 12:38:15 addons-444927 kubelet[1601]: E0210 12:38:15.694191 1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
Feb 10 12:38:22 addons-444927 kubelet[1601]: E0210 12:38:22.694797 1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0"
Feb 10 12:38:30 addons-444927 kubelet[1601]: E0210 12:38:30.694156 1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
Feb 10 12:38:37 addons-444927 kubelet[1601]: E0210 12:38:37.694741 1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0"
Feb 10 12:38:44 addons-444927 kubelet[1601]: E0210 12:38:44.694849 1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
Feb 10 12:38:52 addons-444927 kubelet[1601]: E0210 12:38:52.754711 1601 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
Feb 10 12:38:52 addons-444927 kubelet[1601]: E0210 12:38:52.754777 1601 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
Feb 10 12:38:52 addons-444927 kubelet[1601]: E0210 12:38:52.754886 1601 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2nr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_default(7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Feb 10 12:38:52 addons-444927 kubelet[1601]: E0210 12:38:52.756115 1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0"
Feb 10 12:38:58 addons-444927 kubelet[1601]: E0210 12:38:58.694350 1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
==> storage-provisioner [7bef7a777b3e3d6550f446e15e90a6819656264468867661440ae2788e0f6aaa] <==
I0210 12:33:23.586020 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0210 12:33:23.597839 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0210 12:33:23.597886 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0210 12:33:23.605184 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0210 12:33:23.605348 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-444927_c2111d1a-8855-4840-ba1d-d84eee9e2148!
I0210 12:33:23.605918 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32bfaebe-00fa-401b-b378-8aa3da4fba33", APIVersion:"v1", ResourceVersion:"605", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-444927_c2111d1a-8855-4840-ba1d-d84eee9e2148 became leader
I0210 12:33:23.706462 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-444927_c2111d1a-8855-4840-ba1d-d84eee9e2148!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-444927 -n addons-444927
helpers_test.go:261: (dbg) Run: kubectl --context addons-444927 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx test-local-path ingress-nginx-admission-create-zsvgr ingress-nginx-admission-patch-kwk2g
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-444927 describe pod nginx test-local-path ingress-nginx-admission-create-zsvgr ingress-nginx-admission-patch-kwk2g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-444927 describe pod nginx test-local-path ingress-nginx-admission-create-zsvgr ingress-nginx-admission-patch-kwk2g: exit status 1 (69.8659ms)
-- stdout --
Name: nginx
Namespace: default
Priority: 0
Service Account: default
Node: addons-444927/192.168.49.2
Start Time: Mon, 10 Feb 2025 12:36:04 +0000
Labels: run=nginx
Annotations: <none>
Status: Pending
IP: 10.244.0.33
IPs:
IP: 10.244.0.33
Containers:
nginx:
Container ID:
Image: docker.io/nginx:alpine
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j2nr6 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-j2nr6:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m default-scheduler Successfully assigned default/nginx to addons-444927
Warning Failed 98s (x4 over 2m58s) kubelet Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal BackOff 27s (x9 over 2m58s) kubelet Back-off pulling image "docker.io/nginx:alpine"
Warning Failed 27s (x9 over 2m58s) kubelet Error: ImagePullBackOff
Normal Pulling 13s (x5 over 2m59s) kubelet Pulling image "docker.io/nginx:alpine"
Warning Failed 12s (x5 over 2m58s) kubelet Error: ErrImagePull
Warning Failed 12s kubelet Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Name: test-local-path
Namespace: default
Priority: 0
Service Account: default
Node: addons-444927/192.168.49.2
Start Time: Mon, 10 Feb 2025 12:36:01 +0000
Labels: run=test-local-path
Annotations: <none>
Status: Pending
IP: 10.244.0.32
IPs:
IP: 10.244.0.32
Containers:
busybox:
Container ID:
Image: busybox:stable
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
echo 'local-path-provisioner' > /test/file1
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/test from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qvtsj (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: test-pvc
ReadOnly: false
kube-api-access-qvtsj:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m3s default-scheduler Successfully assigned default/test-local-path to addons-444927
Warning Failed 3m1s kubelet Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:afa67e3cea50ce204060a6c0113bd63cb289cc0f555d5a80a3bb7c0f62b95add: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal Pulling 86s (x4 over 3m2s) kubelet Pulling image "busybox:stable"
Warning Failed 85s (x4 over 3m1s) kubelet Error: ErrImagePull
Warning Failed 85s (x3 over 2m46s) kubelet Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal BackOff 6s (x11 over 3m1s) kubelet Back-off pulling image "busybox:stable"
Warning Failed 6s (x11 over 3m1s) kubelet Error: ImagePullBackOff
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-zsvgr" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-kwk2g" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-444927 describe pod nginx test-local-path ingress-nginx-admission-create-zsvgr ingress-nginx-admission-patch-kwk2g: exit status 1
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p addons-444927 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/LocalPath (188.03s)
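Triage note: every failure in this run is a Docker Hub pull-rate-limit error (HTTP 429 `toomanyrequests`) while pulling `busybox:stable` and `docker.io/nginx:alpine`; the local-path provisioner itself started and acquired its lease normally. A minimal sketch for classifying such runs (hypothetical helper, not part of the minikube test suite; the marker strings and regex are assumptions based on the kubelet lines above) that extracts the rate-limited image refs from log lines:

```python
import re

# Signatures that appear in kubelet/containerd output when Docker Hub
# rejects a pull with HTTP 429 (see the journal lines in this log).
RATE_LIMIT_MARKERS = ("429 Too Many Requests", "toomanyrequests")

# The kubelet log quotes the image ref as \"docker.io/library/...\";
# allow an optional backslash before the quote and capture up to the
# next quote or backslash.
IMAGE_RE = re.compile(r'pull and unpack image \\?"(?P<image>[^"\\]+)')

def rate_limited_images(log_lines):
    """Return the set of image refs whose pulls failed with a 429."""
    images = set()
    for line in log_lines:
        if any(marker in line for marker in RATE_LIMIT_MARKERS):
            match = IMAGE_RE.search(line)
            if match:
                images.add(match.group("image"))
    return images
```

Run against the kubelet journal above, this would flag `docker.io/library/busybox:stable` and `docker.io/library/nginx:alpine`, distinguishing an infrastructure-side rate-limit flake from a genuine addon regression. The usual mitigations are authenticating pulls or pre-loading the test images into the cluster so the registry is never hit.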